Author Topic: NAS, Raspberry or Synology.  (Read 8870 times)


Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #50 on: October 12, 2023, 09:57:03 am »
Both startups and continuous use are hard on the devices, just in different ways.

If it really were as you say, none of my RAPTORs, and none of the disks used in the 2000s laptops, would be operational today.

Not only is startup more intensive on the components, but things like bearings can also need a few turns to get the lube into the right places after standing still for a while.

I think it was true in the early '90s, and false after specific lubricants were designed.

There's a company near here in charge of destroying HDDs. Until a few years ago they simply removed the electronics and threw away the mechanics. Now they destroy everything in a hydraulic press. A real shame, but I managed to save a thousand BLDC motors from 2.5" (laptop) and 3.5" (desktop) hard drives.

The funny thing is, we are discussing lubricants for HDDs, but I have never seen a single bearing seized or deformed due to degradation of the lubricants, and since those HDDs were company equipment, they were certainly used intensively.

Both managers and guys on business trips turn their company laptops on and off even 10 times a day, because if one breaks, they get another one.

Things shift as they heat up and cool down, etc. When constantly running, things stay warm all the time, but much more mechanical wear is placed upon components. So either way, if you use something regularly it will accumulate wear one way or the other.

If we talk about RAPTOR, MAV, or SAS drives (probably also the WD Red Pro and Exos classes), these are things built such that complete mechanical wear-out takes on the order of 50 years.

This is probably the worry: modern SATA drives are likely built with less care, to be cheap, but we are still talking about at least 20 years.

Back to the old days: hibernation on Linux/Apple PowerBook G3/G4 never worked before 2010, so during those days we all put the laptop in the bag, turned it on during class, and then turned it off again. Then on again in the afternoon for the laboratories, off again, back on on the train, off again, and back on at home.

How many cycles did those pATA disks see? At least 5 a day, for at least 5 years of college, and people like me kept the laptop for more years after college.

How come everything works perfectly even though those things have over 15 years of cycles?
« Last Edit: October 12, 2023, 10:16:49 am by DiTBho »
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online Berni

  • Super Contributor
  • ***
  • Posts: 4957
  • Country: si
Re: NAS, Raspberry or Synology.
« Reply #51 on: October 12, 2023, 10:47:05 am »
It is not like each drive has X number of power cycles before it dies.

My point is that any kind of use (be it frequent startups or continuous operation) puts some amount of stress on it compared to a drive sitting on a shelf for 20 years in a reasonably controlled environment. So it's not like using them in a certain way will magically make all drives last forever. But if keeping them off for most of their life is the use case, then an HDD is the wrong storage technology anyway; use a tape drive for that. The point of HDDs is to be an 'in-between' tech between SSDs and tape: long-term magnetic data storage that can be accessed quickly and often. Spinning constantly, or being spun up every so often, is what they are meant to do.

When it comes to laptops, drives usually die from rough handling anyway. I had to replace a lot of HDDs in people's laptops.

I never had any of my own HDDs just flat out fail (though I did replace 1 or 2 way back as a precaution because they made weird noises). Does that mean HDDs are 100% reliable and out of the billions of drives in the world none of them fail? Of course not; given a large enough sample size they will fail. However, they are very reliable these days, so it's not like you need to constantly worry about your drive dying tomorrow. But they do fail sometimes, so buying a super high quality drive and treating it gently so it lasts as long as possible is not a replacement for backups.

Just use your drives however you find convenient and don't worry about how that might affect the lifespan. It does not really matter that much; in both cases the drives have a small but non-zero chance of giving up the ghost.
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #52 on: October 12, 2023, 11:54:44 am »
We are discussing "on/off cycles" as a way of quickly wearing out the mechanical parts; as I have already said, a "gentle soft-start" can reduce the electronic wear-out.

Modern (> 2001) HDDs use "fluid dynamic bearings", and it is relatively rare for these to wear so badly that the drive cannot function properly.

Keeping an HDD off for most of its life is another story, as leaving a bearing still, whether sealed or open, exposes the lubricant to what is called "cementing".

This also applies to bicycle hubs, in fact right now I'm dismantling two brand new C-Record hubs that have been stored in a warehouse for over 30 years: they need to be dismantled, degreased, cleaned, and put in fresh lubricant.

For HDDs it's called "stiction", a portmanteau of "static" and "friction", which occurs when the armatures that drive the flying read/write heads actually get stuck in place and refuse to operate, usually after a very long period of disuse (>10 years at least). It seems counterintuitive, but that's how it is.
« Last Edit: October 13, 2023, 10:49:29 am by DiTBho »
 

Offline Postal2

  • Contributor
  • Posts: 45
  • Country: ru
Re: NAS, Raspberry or Synology.
« Reply #53 on: October 13, 2023, 07:55:23 pm »
I'm using a hard drive.
The NAS is in another room, connected via gigabit switches. It always stays cold; the fan is set to "auto" but never spins. Fully silent. Your NAS must be silent too to be a good home NAS.
« Last Edit: October 13, 2023, 08:10:05 pm by Postal2 »
 

Offline Veteran68

  • Frequent Contributor
  • **
  • Posts: 727
  • Country: us
Re: NAS, Raspberry or Synology.
« Reply #54 on: October 13, 2023, 08:11:17 pm »
Do you always keep your laptop "on" too?
Before the advent of SSDs, did you always keep your laptop turned "on" so as not to stress its hard drive?
...
But I'll also tell you one thing, folks: my 10,000 rpm Raptor SCSI HDDs are mounted on a UNIX server, which I certainly don't keep on all the time since it draws a fixed 500 watts. Bought in 2003, it has no "idle mode", and the HDDs have seen about ~4000 starts in 20 years, but have no problems.

Yes, unless they're being stuffed in a bag, my laptops are always on. My work laptop, docked at my desk, is never shut down unless it's going in the bag for more than a few minutes. It only reboots for Windows updates or other maintenance that requires it. My two MacBook Pros at home, one of which is on my workbench, are always on unless being transported. The only PC I have that gets shut down regularly is my main Windows desktop, which has five (5) monitors. If I'm not running a long/overnight process, it does get shut down nightly.

A NAS is a server. Servers -- at least those that are in use and serving a purpose -- are intended to run constantly, not to be shut down every day. In addition to my NAS, I have two full-size servers in my rack at home that run 24/7/365. One runs a Core i7 and the other dual Xeons. I also have 3 SBC servers that likewise never shut down outside of maintenance activities.
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #55 on: October 14, 2023, 07:56:27 am »
Yes, unless they're being stuffed in a bag, my laptops are always on

We're talking about laptops that go in a bag, because it makes no sense at all to transport a powered hard disk, potentially with spinning platters.
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #56 on: October 14, 2023, 08:13:38 am »
A NAS is a server. Servers -- at least those that are in use and serving a purpose -- are intended to run constantly, not to be shutdown every day.

It's a server, but it is not written anywhere that, because it is a server, it must remain turned on all day, every day.

This is simply how you use it.

If you have scientific evidence and/or an argument that keeping an HDD in perpetual rotation wears it out less, then you can make a useful contribution to this discussion; otherwise I will end it here with what I think about this:

bullshit!
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: NAS, Raspberry or Synology.
« Reply #57 on: October 15, 2023, 06:50:06 am »
Servers -- at least those that are in use and serving a purpose -- are intended to run constantly, not to be shutdown every day.
:o

Servers and appliances exist to provide a service their users need.  Whether they are on or not depends on the users' needs.
NAS box for thin clients?  Yep, probably needs to run 24/7.
End-of-week backup archive server?  Makes sense to turn that power-hog on only when needed; plus it reduces the chance someone "accidentally" messes with the backups in the meantime.

Like I mentioned before, spinny-disk startup causes wear on the power supply electronics.  (I do not mean specifically PC power supplies, but the entire supply chain from the wall socket down to the BLDC motor.)  Thermal cycling is a significant part of it, because as I recall, Google found out they could run their HDDs constantly at 40°C/104°F ambient without any degradation in lifetime –– we're speaking statistically here, of course.

If you have 6 or more spinny drives in the same enclosure, it's time to worry about power-on sequencing, too, since the BLDC spin-up current draw is significant.  (I'm still wondering why the darn things have to spin up immediately when power is applied.  It'd be so much easier if they could be powered on, and then a separate "now spin up" message would spin them up.  BIOS/EFI/Firmware already needs to support the drive controller anyway, it'd really be just one more message in the flow.)
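The benefit of power-on sequencing can be sketched with some back-of-the-envelope arithmetic. The spin-up and idle current figures below are assumptions for illustration, not specs for any particular drive:

```python
# Illustrative arithmetic only: the 12 V current figures are assumed,
# not taken from any drive's datasheet.
SPINUP_PEAK_A = 2.0   # assumed 12 V spin-up peak per 3.5" drive
IDLE_A = 0.5          # assumed 12 V draw once already spinning

def peak_12v_current(n_drives: int, staggered: bool) -> float:
    """Worst-case 12 V rail current during startup."""
    if staggered:
        # One drive spinning up while the already-started drives idle.
        return SPINUP_PEAK_A + (n_drives - 1) * IDLE_A
    # All drives spin up simultaneously.
    return n_drives * SPINUP_PEAK_A

print(peak_12v_current(6, staggered=False))  # 12.0 A
print(peak_12v_current(6, staggered=True))   # 4.5 A
```

Even with these rough numbers, staggering a six-drive enclosure cuts the worst-case startup draw to well under half.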

Current machines support various power saving levels.  The two we need to consider are idle and suspend.
Idle is when the machine is not doing any significant work, but is ready to immediately respond to requests.
Suspend, or more specifically suspend-to-RAM, is a state where the processors and most of the rest of the machine are unpowered, but RAM is kept refreshed.
Linux kernel-based suspend is stable on Intel/AMD and ARM hardware.  (There may be some specific cases where hardware bugs or undocumented features prohibit suspend, but they're really, really rare.)

On many ARM boards, suspend consumes only a fraction of a watt of power.  I already mentioned Odroids as examples.  Nowadays there are even Intel/AMD server-class motherboards that support sub-watt suspend-to-RAM.  As it is a motherboard feature –– namely, the power supplies must be under software control and power saving level change sequencing bullet-proof ––, each motherboard manufacturer decides for themselves how important a low-power suspend feature is.  As it is not something many mass customers demand, the exact power level achievable using typical server-class hardware varies.  On desktop-class machines, and especially mini-ITX and micro-ATX Intel/AMD motherboards, suspend support gets much more focus, so low-power suspend tends to be more common.

At sub-watt suspend power levels, is there even a difference between "off" and "suspended"?
(Other than a suspended machine being accessible with a couple of second latency, and a machine turned off being inaccessible, unless it has external wakeup support, I mean.)

(Note: When the RAM-based image is put into storage, and the machine fully powered down, we're talking about suspend-to-disk or hibernation.  This is well supported on tablet and laptop boards, and Linux can support it, but it obviously requires proper storage: you don't want to hibernate to an SD card, for example.  Also, combined suspend+hibernate exists, in which case the initial suspended image is stored on disk, but power-off is delayed in case the machine is needed within some time window.  Waking up from hibernation is "slow", because it involves a full hardware boot-up; only, instead of loading an OS, the hibernated image is loaded.  A major part of the wake-up delay is hardware (BIOS/EFI/firmware) boot-up, with the loading of the full system image from storage also important.  The activation of the hibernated image is fast, basically the same as from suspension.  Thus, hibernation wake-up time is dictated by hardware and firmware, not the OS.)

Because only the RAM (and some low-power subsystems) are powered during suspend (suspend-to-RAM; in hibernation, the entire machine is powered off), some sort of a notification is needed for when the system needs to come back up to idle/active power levels.  In PC-class Intel/AMD hardware, power supplies and motherboards keep one of the +5V lines powered even during suspend, so that USB HID events (detected by the motherboard, involving no CPU activity!) will wake up the machine.  Similarly, many network interface (cards and built-in ports) use that line to power their receive sides, so that when a suitable Wake-On-LAN packet is received, the NIC tells the machine to wake up.  (Again, WOL involves hardware only.)
WOL support on ARM hardware varies, but it is present on, for example, the Odroids I already mentioned.  (And I mentioned them only because I have one, and know of them; I am NOT implying they're the best, or even the only ones to do that.  They're examples and suggestions to look at, nothing more.)
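For reference, a WOL magic packet is simple enough to build by hand: 6 bytes of 0xFF followed by the target MAC address repeated 16 times, sent as a UDP broadcast (ports 7 and 9 are customary). A minimal Python sketch; the MAC address shown is a placeholder:

```python
import socket

def wol_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send the magic packet as a UDP broadcast."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(wol_magic_packet(mac), (broadcast, port))

# send_wol("00:11:22:33:44:55")  # placeholder MAC; use your NIC's address
```

The NIC only inspects the payload for the 0xFF header plus its own MAC, which is why the packet can be delivered over almost any protocol; UDP broadcast is just the easiest to emit.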

In particular, basically all ARM-based NAS boards, and all ARM-based router/switch boards with suspend capability (low-power modes where RAM is kept refreshed but CPU and some other subsystems are not), have had WOL support.  Some even expose a wakeup pin (although I suspect it is more common to have it as an undocumented pad or test point; but that's just unfounded suspicion, not knowledge).  Whether USB is powered during suspend or not varies, depending mostly on whether the board is able to power off the USB ports under software control at all.  (When it is supported, it's also supported by the Linux kernel; take a look at the /sys/bus/usb/devices/usbN/power/ pseudofiles.  This file-like interface (under /sys and /proc) is how the Linux kernel exports things.  They are "pseudo" in the sense that they do not actually exist at all, not even in RAM: the kernel only generates the structures when some process is actually examining them.)

So, what does all that above waffle mean when one considers a NAS box with spinny-rust HDD drives?
  • You want your OS and related files on Flash (M.2 SSD).
    Because software will do a lot of file accesses whenever not suspended, having an SSD for the OS minimizes the impact on the HDDs.
    There are lots of tunables for such accesses in Linux and BSD –– even the act of when and how often file access timestamps are modified when the files are read is configurable.  You don't want to have to mess with those on top of everything else to get the NAS box to perform best to your needs, and putting the OS and logs etc. on a separate SSD drive gives you that option.
  • If you have a copy of the OS SSD you keep as a backup, updated say once a week, you can recover from Doh! moments by swapping the two.
    It also means the HDDs and the NAS system itself are decoupled, letting you transfer HDDs in-and-out and between NAS boxes, for example when upgrading hardware, with minimal effort.  If the OS is on the same HDDs as your data, when anything goes wrong, pain will ensue.
  • You want suspend (suspend-to-RAM) and Wake-on-LAN support, with a sub-1W suspend power use.
    This way, you do not need to power down the box unless you do not want anyone to be able to access it: at 1 W, the energy use is just ~8.8 kWh/year.
    My suggestion is to only power it down when you are away for several days, and do not want it accessible at all.
  • Suspend works perfectly with HDD spin-up/down/parking.
    HDDs will automatically spin up when read from or written to, and most can be configured to automatically spin down when not accessed for a specified time.
    If your OS and related files are on an SSD, HDDs will only get spun up when information on them is actually accessed.
  • On plain HDDs and software RAID setups, you'll want to run the smartd daemon (Linux, BSD) to periodically read each entire drive when otherwise idle, so that the drive hardware itself can detect deteriorating data and relocate failing blocks.
    Because the rest of the system doesn't care about the data itself, only whether the drive reports success (with the data), such scanning is integrated into most RAID controllers, and you don't need to run such a daemon; you just run some kind of host RAID controller management daemon instead.
    Now, to limit the wear on the HDDs when they're expected to be spun down for the majority of the time, you do need to configure smartd smartly: you want it to scan one drive at a time (to limit "idle" power use), as continuously as possible (when the machine is otherwise idle), with each drive fully scanned in a given period: I prefer about a month, but it varies depending on drives and people.  What you don't want, is having to spin-up a disk just for smartd scanning.
    If you run on mains electricity, you may prefer to set smartd to run only at night.  If you run on solar, you may prefer to set smartd to run only during the day, when there is plenty of energy available.
  • The above is not controlled by any script or GUI, but implicitly, by timing operations sensibly.
    HDDs do not know or care whether a given access is because a user wants to open a file, or because smartd decides it's time to scan the drive.
    HDDs spin up when needed, and spin down when their internal controller decides they should, or (for ATA/SATA) when the OS or userspace sends a spin-down-now command to the drive.  On Linux and BSD, you can typically use hdparm to send such commands, or configure drive idle spin-up/spin-down parameters.  Most HDDs also have internal temperature sensors, which you can read with hddtemp.  So, to make things happen like I described above, you need to consider the entire system and configure each subsystem in a suitable fashion.  (I do not know if e.g. TrueNAS, a distribution dedicated to NAS boxes –– CORE is FreeBSD-based, SCALE is Linux-based ––, makes such configuration any easier.)
  • In Linux and Android, the kernel has one or more power-state governors –– the CPU frequency scaling and power-state management subsystem –– that can be configured from userspace.  Not all hardware supports all possible governors, and not all ready-made kernels have more than one governor compiled in, so to select the one that best matches your use case, you may have to experiment and even recompile your own kernel (for testing; I do recommend using distro kernels for appliances if possible, because those get proper maintenance).
  • Sensible "idle" power draw is nice, but when suspend and HDD spin-up/down are configured well, it only mixes with the active power draw in some duty ratio (depending on the behaviour of the above-mentioned governor), and is much less important than low suspended power draw.
    Essentially, the NAS box will only be idle when active bursts occur within the idle time limit before suspend.  Do not forget that you can configure a different spin-down time limit for the HDDs than the suspend limit: it is perfectly okay to have a NAS box that wakes up from suspend within a couple of seconds and suspends rather aggressively (say, after 10 minutes of idle), but at the same time has the HDDs spin down only after 30+ minutes of no accesses.
    I like that, because it aggressively reduces power use and running temperatures, but is very gentle on stressing the HDDs by avoiding unnecessary spin-ups/downs, and –– as far as I understand –– should also maximise the system lifetime, stressing it minimally, overall.
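As a concrete example of the spin-down timers mentioned above: the `hdparm -S` timeout uses an odd encoding (per hdparm(8), values 1..240 are units of 5 seconds, up to 20 minutes; 241..251 are units of 30 minutes). A small helper to translate a desired timeout, assuming that encoding:

```python
def hdparm_standby_value(minutes: float) -> int:
    """Translate a desired spin-down timeout into an `hdparm -S` value,
    per the encoding in hdparm(8): 0 disables the timer, 1..240 are
    units of 5 seconds (up to 20 min), 241..251 are units of 30 min."""
    seconds = minutes * 60
    if seconds <= 0:
        return 0                        # disable the standby timer
    if seconds <= 240 * 5:              # representable in 5 s units
        return max(1, round(seconds / 5))
    units = round(minutes / 30)         # fall back to 30-minute units
    if 1 <= units <= 11:
        return 240 + units
    raise ValueError("timeout not representable in the -S encoding")

print(hdparm_standby_value(10))  # 120  ->  hdparm -S 120 /dev/sdX
print(hdparm_standby_value(30))  # 241
```

The `/dev/sdX` device name above is of course a placeholder for your actual drive.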

Okay, but we were talking about servers, weren't we?

The same argument applies to servers.  If you have a motherboard that supports <1W suspend-to-RAM (as measured at the wall socket), and wakes up from suspend within a second or two –– easily achieved in Linux on Intel/AMD/ARM hardware when the OS is on an SSD; with the OS on an HDD, some accesses will always occur during the wakeup process, and thus wakeup is delayed by the HDD spin-up time ––, then the same energy savings apply, and hibernation or fully powering down the server is no longer necessary.

If you have a server motherboard that does not have low-power suspend –– typically the case with legacy hardware! ––, then hibernation or powering off the darn thing when not needed is definitely a good option.

If you have such a server, one trick that few people realize is available is to use a low-power ARM SBC, with Ethernet connected to the same switch and LAN (and VLAN) as the server, just to receive Wake-on-LAN and other commands and to control server wakeup-from-hibernation.  Power-down is not a problem, as even legacy server motherboards have software power-down support, and it is required anyway for proper hibernation; but you might want to add a normally-closed (openable) contactor to cut off power for remote recovery from lock-ups.

There is a soft power-on button on all server boards, which you can connect in parallel with an optocoupler (you might need a transistor for signal inversion, as the pin is normally not connected, and connects to ground or a voltage when the button is pushed) to an SBC output pin, so that the SBC can wake the server from hibernation safely.  A current sensor on a suitable server power bus would tell the SBC when the server is fully shut down (it's difficult to determine reliably otherwise), plus you can often sprinkle in some temperature sensors read by the SBC to detect issues; the combination of both, plus some watchdog software, would even allow automatic lockup detection.
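The watchdog logic sketched above can be reduced to a small decision function. The current threshold, ping timeout, and action names below are hypothetical, chosen only to show the structure:

```python
# Sketch of the SBC power-controller decision logic described above.
# Thresholds and action names are hypothetical, not from any real board.

OFF_CURRENT_A = 0.2     # below this, treat the server as fully shut down
PING_TIMEOUT_S = 120    # unresponsive for this long => suspect a lockup

def watchdog_action(bus_current_a: float, secs_since_ping: float,
                    expected_on: bool) -> str:
    """Decide what the SBC power controller should do next."""
    if bus_current_a < OFF_CURRENT_A:
        # No meaningful power draw: the server is safely off.
        # If it should be on, pulse the soft power button via the opto.
        return "press-power-button" if expected_on else "idle"
    if expected_on and secs_since_ping > PING_TIMEOUT_S:
        # Powered but unresponsive: likely locked up; open the contactor.
        return "cycle-contactor"
    return "idle"

print(watchdog_action(0.05, 0, expected_on=True))   # press-power-button
print(watchdog_action(1.5, 300, expected_on=True))  # cycle-contactor
print(watchdog_action(1.5, 10, expected_on=True))   # idle
```

In a real deployment the function would run in a loop on the SBC, fed by the current sensor and a periodic ping, with the two actions wired to the optocoupler and contactor outputs.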

So, as someone who has practical experience in HPC and servers, both software and hardware, I do fundamentally object to the idea that a server or appliance needs to be running 24/7.  To me, they need to be running when I need them; and when I don't need them, I want them to not waste power (with my utterly arbitrarily set limit at around 1 watt).  There are many ways of making that happen, including "hacks" like adding a low-power cheap ARM Linux SBC as your power controller for hardware that doesn't support power save or suspend, so use them when needed.  Do not just adopt silly rules of thumb like "a server is on 24/7" without carefully considering the implications.
« Last Edit: October 15, 2023, 06:52:50 am by Nominal Animal »
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #58 on: October 15, 2023, 11:38:19 am »
Like I mentioned before, spinny-disk startup causes wear on the power supply electronics.  (I do not mean specifically PC power supplies, but the entire supply chain from the wall socket down to the BLDC motor.)  Thermal cycling is a significant part of it, because as I recall, Google found out they could run their HDDs constantly at 40°C/104°F ambient without any degradation in lifetime –– we're speaking statistically here, of course.

As far as I understood and observed, sure, spinny-disk startup causes wear on the power supply electronics (A), but it does so slowly enough that, if you look at the statistics, it is NOT the bulk of the problem; the probability that the very thin layer of lubricant (B) applied over the platters "may fail" as a bearing for the flying read/write heads is much higher, as it ages much more quickly locally and can cause a head crash at that point:

p(may_fail(A)) = 0.05
p(may_fail(B)) = 0.70

B: it seems that prolonged vibration and very high sustained data rates, perhaps with the heads always moving over the same LBAs for long data sessions, accelerate this process much faster.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6264
  • Country: fi
    • My home page and email address
Re: NAS, Raspberry or Synology.
« Reply #59 on: October 15, 2023, 11:47:01 am »
The mechanical vibration/noise during spin-up probably causes more overall wear than the current draw.

(Funny side note: decades ago, I once bought a Maxtor IDE drive that was so out of balance it would not stay put when spun up on top of a horizontal flat table.  Got it replaced immediately.)

As to the platters, I thought the read heads use an air cushion on top of the platters, no lubricants?
(A small area is reserved for parking, and never contains any data, so that read/write heads are always positioned over the platters, even when powered down.  In the parking position, the head assembly has additional supports that move the heads further away from the platters, to ensure they physically cannot crash to the platter surface when powered down.  The spinning of the platters creates the air cushion, and the heads only move from the parking area when the platters are spinning and the air cushion present.)

The head assembly does move, and any lubrication on that (head pivot?) could age badly (especially if unused) and become sticky.  If that happens, the controller within the drive will fail to locate the correct sector (stiction making small head moves hardest), and will error out.  Many drives do an emergency park of the head assembly in an effort to get the head assembly "unstuck" (and in many other internal error conditions), just like when power is lost; this creates the very distinctive 'click' sound.
 

Offline Microdoser

  • Frequent Contributor
  • **
  • Posts: 423
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #60 on: October 15, 2023, 12:36:51 pm »
I recently looked into spinning-HD reliability (because they just seemed to fail more often in my experience) and found that a drive made in recent years is 50% more likely to fail, and to fail within 3 years, than one made before roughly 2015 (can't remember the exact year). Three years is the point by which a drive either fails or goes on to last a very long time (barring abuse).

Of course, SSDs are far more reliable these days than a mechanical drive.
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #61 on: October 15, 2023, 01:21:35 pm »
The mechanical vibration/noise during spin-up probably causes more overall wear than the current draw.

Sure, the spin-up makes a lot of vibration and noise, but for a short time. It is clear that this is a problem, but not a primary one; in fact p(may_fail) = 0.09, still an order of magnitude lower than media degradation, which happens after the spin-up with a greater probability, p(may_fail) = 0.70, over much longer exposure times.

The question here is: what degrades the platters so much, to the point of crashing the flying read/write heads?

I have given two reasons, but there are certainly others; in this case the vibrations produced by the spin-up do contribute, yes, but to a minimal extent  :-//
« Last Edit: October 15, 2023, 10:47:18 pm by DiTBho »
 

Offline Postal2

  • Contributor
  • Posts: 45
  • Country: ru
Re: NAS, Raspberry or Synology.
« Reply #62 on: October 15, 2023, 10:03:11 pm »
I'm still wondering why the darn things have to spin up immediately when power is applied
Because the firmware for the disk is written on the disk itself.
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #63 on: October 15, 2023, 10:37:00 pm »
I'm still wondering why the darn things have to spin up immediately when power is applied
Because the firmware for the disk is written on the disk itself.

On the HDD flash/prom.

And on the Raptor and Fujitsu MAW SCSI HDDs, the firmware allows you to choose whether or not to enable soft-start. There is a special jumper to set. At cold start the firmware checks if the jumper is set; if it is, the BLDC motor accelerates with a much softer curve -- aka gentle spin-up -- taking much longer to reach the final rpm.

Even though we are talking about an impulsive event lasting a few seconds, soft-start means:
  • less vibration (most significant)
  • less friction (not significant, but it's there)
  • less heat (not significant, but it's there)
  • lower current peaks (significant)
  • less stress on the power parts of the PCB (significant)

For 10,000 rpm Raptor HDDs, there is a second jumper to select a slow soft-start of 1.9 seconds from zero to full spinning speed, which also requires configuring the SCSI HBA so that it in turn delays the "device ready" check, typically with active waiting within a timeout.
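As a rough illustration of why a longer ramp helps: at constant angular acceleration, the torque (and hence, roughly, the motor current) needed to reach a given speed scales inversely with the ramp time. The inertia value and the 0.5-second "hard start" below are assumptions for comparison, not Raptor specs:

```python
# Back-of-the-envelope model: a BLDC accelerating a platter stack at
# constant torque needs torque ~ J * omega / t_ramp, and motor current
# is roughly proportional to torque. Numbers are illustrative only.
import math

RPM = 10_000
J = 1e-4  # assumed rotor + platter-stack inertia, kg*m^2

def spinup_current_ratio(t_fast: float, t_soft: float) -> float:
    """How much higher the fast-start torque/current peak is vs soft-start."""
    omega = RPM * 2 * math.pi / 60        # target angular velocity, rad/s
    torque = lambda t: J * omega / t      # constant-acceleration ramp
    return torque(t_fast) / torque(t_soft)

# A 1.9 s slow soft-start vs a hypothetical 0.5 s hard start:
print(round(spinup_current_ratio(0.5, 1.9), 2))  # 3.8
```

The ratio reduces to t_soft / t_fast, so under this crude model the 1.9-second ramp asks the supply for roughly a quarter of the peak torque current of a half-second start.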

I think that for most SATA HDDs those jumpers simply don't exist; I've never understood why.
« Last Edit: October 15, 2023, 10:46:19 pm by DiTBho »
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #64 on: October 15, 2023, 10:48:11 pm »
As to the platters, I thought the read heads use an air cushion on top of the platters, no lubricants?

Actually they use both tricks.

When you get on the highway, you enter a special lane used for accelerating. Similarly, the outermost area of the platters is used (or rather, should be used) for spin-up, and here you understand why the vibrations at start-up are not really a problem if the head parking was done correctly, rather than the plug having been suddenly pulled (usually due to a blackout with no UPS in use).

Smart firmware (like on my Raptor and Fujitsu MAW, both enterprise-level SCSI) tries to move the arm to the outer area BEFORE spinning the platters up under acceleration. There is a special "park heads" command for this.

Air cushions help, but the special lubricant spread on the platters works as a "bumper" once the platters are already rotating at maximum speed in uniform motion, and it's the last countermeasure to avoid crashing a flying read/write head from an external (even small) shock.

It's been in use since 2009, so applied to HDDs of 320..512 GB and up, and it works quite well, but it wears out much more quickly ... in theory you could also replace it with fresh lubricant (assuming you can find that type on the market).
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #65 on: October 15, 2023, 10:48:51 pm »
The head assembly does move, and any lubrication on that (head pivot?) could age badly (especially if unused) and become sticky.

The motor bearings are of the "fluid dynamic" type; they are far more advanced than those used in the '90s, and so is the chemical composition of the lubricants.

As I understand it, these special lubricants begin to degrade after 10 years of non-use, and from then on the degradation increases in geometric progression.

* * * conclusion * * *

My position is:
- turning HDDs on and off doesn't wear them out much
- you can feel comfortable turning them on/off even several times a day when needed (without exaggerating)
- you should spin them up at least a couple of times a year and keep them running for at least a couple of hours

The things you really need to worry about are:
- avoid vibrations and shocks, especially while they are spinning
- avoid operating them in environments that are too humid
- avoid operating them in very dusty environments, to preserve the air filter, which may otherwise fail
- avoid stressing them with data rates that are constantly too high
- avoid operating them at temperatures that are too high (>40°C) or too low (<10°C)
 

Offline Postal2

  • Contributor
  • Posts: 45
  • Country: ru
Re: NAS, Raspberry or Synology.
« Reply #66 on: October 15, 2023, 11:40:49 pm »
On the HDD flash/prom.
No. See the BeagleBoard AM335x technical reference manual and you will understand how a modern SoC boots.
« Last Edit: October 15, 2023, 11:49:10 pm by Postal2 »
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #67 on: October 16, 2023, 10:41:23 am »
On the HDD flash/prom.
No. See the BeagleBoard AM335x technical reference manual and you will understand how a modern SoC boots.

As far as I know, these are the possibilities I have seen:
  • #1: boot from an external parallel flash/ROM ( <2006 )
  • #2: boot from an external serial (SPI) flash/ROM
  • #3: boot entirely from an internal flash/ROM
  • #4: boot from an internal flash/ROM, then look for an external SPI flash
  • #5: boot from an internal flash/ROM, then look for tftp-boot
  • #6: boot from an internal flash/ROM, then look for RS232-boot

e.g.
The IBM/AMCC PPC40x SoCs are of type #1
The Atheros 5..9 SoCs are of type #2
The MediaTek/ARM SoCs are of type #4

For the integrated controller on an HDD, I would expect type #2 or type #3.

My Seagate Barracuda 7200.12 s/ATA HDDs have programmable firmware on a flash. I've never looked at the PCB, so I don't know if there is an external SPI flash chip, but I know you can update the firmware over the s/ATA channel (via Seagate's software), and there is a serial console (LVTTL) on a couple of pins on the back connector.

So what's your point? I did not get it :-//
 

Offline Postal2

  • Contributor
  • Posts: 45
  • Country: ru
Re: NAS, Raspberry or Synology.
« Reply #68 on: October 16, 2023, 11:10:38 am »
On the HDD flash/prom.
No. See the BeagleBoard AM335x technical reference manual and you will understand how a modern SoC boots.
So what's your point? I did not get it :-//
I suggest you think about why the mechanical part is only ever replaced "hot" (to retrieve data).
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #69 on: October 16, 2023, 01:56:41 pm »
I suggest you think about why the mechanical part is only ever replaced "hot" (to retrieve data).

Again, it makes no sense. No one replaces anything these days: the entire hard disk is usually thrown away and replaced under warranty. Rarely, if the damage is limited to the PCB only(1) and the platters are in working order, only the PCB is replaced to retrieve the HDD data, which are stored on the platters!

So, it makes perfect sense as I wrote above: for the integrated controller on an HDD, I would expect type #2 or type #3; this way, if there is a firmware bug (as happened with my Seagate Barracuda), it can be fixed without the disks having to go back. The user can download the software and update the flash of his/her HDD.

-

From what you wrote it is hard to understand anything, and it almost seems like you want to say that the stage-2 firmware is written on the platters, which - frankly - would be bullshit for both technical and commercial reasons.



(1) which, according to what I wrote above, has a lower probability of failing, while the platters have the highest probability of breaking
 

Offline Postal2

  • Contributor
  • Posts: 45
  • Country: ru
Re: NAS, Raspberry or Synology.
« Reply #70 on: October 16, 2023, 03:15:40 pm »
I suggest you think about why the mechanical part is only ever replaced "hot" (to retrieve data).
and it almost seems like you want to say that the stage2-firmware is written on the platters
Some blocks of data need to be read from the disk before it starts up. It's not a secret and it's widely known.
only the PCB is replaced to retrieve the HDD data, which are stored on platters!
Of course, you can change the controller easily, because the critical data will be read from the disk as long as it is not corrupted.

Also, I know that a PCB without the disk will answer only via UART. This behavior points to a "stage 2" stored on the disk.
But a modern SoC really does have room in its own internal flash. You can check a PCB without the disk via SATA and report here.
« Last Edit: October 16, 2023, 03:39:31 pm by Postal2 »
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #71 on: October 16, 2023, 03:39:53 pm »
Some blocks of data need to be read from the disk before it starts up. It's not a secret and it's widely known.

Oh, "widely known", yet you only vaguely mentioned a SoC. Can you give me a technical document about that?

Otherwise, let me say what I think: I think you're making a real mess of it; the only data that is read from the platters is the alignment data, and this is something you can do even with a very slow soft-start.
 

Offline Postal2

  • Contributor
  • Posts: 45
  • Country: ru
Re: NAS, Raspberry or Synology.
« Reply #72 on: October 16, 2023, 04:27:25 pm »
Some blocks of data need to be read from the disk before it starts up. It's not a secret and it's widely known.
the only data that is read from the platters is the alignment data
Maybe. But in that case access via SATA to a bare PCB would be possible. It is not.

I guess that when a disk is fabricated, some special machine connects to it and loads it (the mechanical part only). And the PCB is common to all disks, with nothing special on it, because flashing firmware onto each PCB is not good for mass production.
« Last Edit: October 16, 2023, 04:46:07 pm by Postal2 »
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16620
  • Country: us
  • DavidH
Re: NAS, Raspberry or Synology.
« Reply #73 on: October 16, 2023, 05:27:59 pm »
I looked at using a Raspberry Pi for NAS duty a couple years ago.  There are stackable expansion boards to support 2.5 and 3.5 inch SATA drives:

https://geekworm.com/collections/nas-storage

I ended up repurposing my old x86 workstation running Windows 10 with Storage Spaces because of its versatility with adding and removing drives.  FreeNAS worked great but was not as flexible.

but what's the point of leaving a NAS exposed to the network, therefore exposed to a lot of attacks just because you're lazy?

The network?  Why have only one?

I keep my NAS on a separate network which has no route to the internet.
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: NAS, Raspberry or Synology.
« Reply #74 on: October 16, 2023, 08:00:31 pm »
Maybe. But in that case access via SATA to a bare PCB would be possible. It is not.

On my Seagate Barracuda you can update the flash via the s/ATA link, and you can use the serial RS232 channel to debug and manually relocate clusters. Many things can be done that are done in the factory and are not fully documented.

I guess that when a disk is fabricated, some special machine connects to it

on that serial cable, for sure! That's its purpose! It wasn't made for the user, who shouldn't even know it exists.

flashing firmware onto each PCB is not good for mass production.

The controller chip on the HDD boots from its internal ROM, so once soldered onto the PCB it is able to speak "s/ATA"; an industrial machine can connect to the s/ATA link and the serial cable to program the SPI flash with the firmware.

The platters come from a different machine, with preloaded sync patterns. As a first step, the firmware must map the clusters; this is done during the early QA phases, during which the HDD is tested and must pass at least 96 hours of operation.

It may be discarded if the platters do not have enough spare clusters; and even that is still not enough to say that an HDD has passed the infant-mortality tests.

Surely, if it does not break after 96 hours, the fluid-dynamic bearings and their lubricant, as well as the flying read/write heads, have been assembled correctly; but for the rest, there are company-internal { Seagate, Hitachi, IBM, WD, .. } techniques and QA testing policies that I don't know about  :-//
 

