You mentioned that although you already have your Synology NAS, you might be using this one for something else. Beelink's ME mini seems to have an under-powered PSU and/or a single 3.3 V rail that isn't up to handling multiple SSDs. Be especially careful if you install more than 4 drives and plan to use them simultaneously, e.g. in a RAID configuration.

---

I installed 6 WD Red SN700 2 TB SSDs, swapped the OS for TrueNAS since it's more suitable for the job, and configured all drives as a single RAID pool.

Initial testing looked promising, even though thermals weren't great: 55-58 deg C at idle for a low-power device is not horrible, but not good either. The drives also consumed too much energy while idle. Each reported entering the P4 (sleep) power state, which according to the WD datasheet should bring that model down to as little as 5 mW, yet each seemed to be pulling over 1 W while doing nothing. I don't know whether that's an issue with Beelink's hardware, the BIOS, the NVMe driver in the OS, some WD shenanigans, or any combination of the above. With 6 drives installed I measured 15-17 W at idle on the 240 V side; without drives it was 5-6 W.
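For anyone who wants to check their own drives, the power states a drive advertises and the state it is currently in can be read with nvme-cli. This is only a rough sketch (assuming nvme-cli is installed; device names will differ and not every drive decodes every feature):

# List the power states the controller advertises (PS4 should be the ~5 mW sleep state):
nvme id-ctrl /dev/nvme0 -H | grep -A1 "^ps "
# Read the currently selected power state (feature 0x02 = Power Management):
nvme get-feature /dev/nvme0 -f 0x02 -H
# Dump the APST settings that decide when the drive drops into the deeper states:
nvme get-feature /dev/nvme0 -f 0x0c -H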
Aside from that it seemed fine, so I decided to give it a go and do some further testing. I configured everything and began copying data to my shiny new NAS.

After copying a few GiB of data from my PC to the ME mini, the ZFS pool crashed as 2 drives failed simultaneously. The system logs said the drives had entered the D3cold state and couldn't get out of it, so they became inaccessible:

Aug 31 00:24:03 nas kernel: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xffff
Aug 31 00:24:03 nas kernel: nvme nvme0: Does your device have a faulty power saving mode enabled?
Aug 31 00:24:03 nas kernel: nvme nvme0: Try "nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off" and report a bug
Aug 31 00:24:03 nas kernel: nvme 0000:03:00.0: Unable to change power state from D3cold to D0, device inaccessible
Aug 31 00:24:03 nas kernel: nvme nvme0: Disabling device after reset failure: -19
Aug 31 00:24:04 nas kernel: zio pool=pool vdev=/dev/disk/by-partuuid/e3a8c2cf-48bf-43f3-8d0f-18b2b062a6d7 error=5 type=2 offset=152168439808 size=4096 flags=3145856
(...)

A reboot helped, and since I didn't care about that data, I cleared the ZFS pool state to make TrueNAS show all-green again. I began copying again and the same thing happened, but this time only 1 drive failed and, to make things worse, a different one than the previous two. Running it idle (or at least close to idle) for a few hours was fine.

I repeated this multiple times and the situation repeated every time: 1 to 3 drives "failed", mostly at random, although very often the same 2 ports were among them. I swapped SSDs between ports to rule out specific drives being the problem; the same ports failed again, which suggests it's most likely not the drives themselves. I had also done read/write tests on single drives at full speed before all this and they were fine, and all drives passed SMART tests.

Since it happened only under load, it looked more like a PSU problem than anything else.

I made a simple test rig, using a third-hand tool to hold an oscilloscope probe on the 3.3 V line going to one of the M.2 ports. Luckily the way the drives are oriented makes those pins easy to access, and the fact that the 3.3 V lines are grouped at the connector edges helped a lot: I just placed the probe tip between two adjacent 3.3 V pins. That made the whole rig stable enough for longer testing.

Already during OS boot I observed the voltage dropping below 2.5 V for at least 140 us. You can see that in the attached boot_voltage_drop.png screenshot; the time base was too short to see how long the dip really lasted.

While copying data to the NAS the voltage instability seemed smaller, but it still went below 3.0 V. Keep in mind that I copied over a 1 Gbps Ethernet connection, so the write speed was only around 100 MiB/s. That is way below what these drives can handle, so I assume the average power draw wasn't even close to maximum. This can be seen in voltage_drop_during_copying_port3.png; I enabled persist mode to capture what happened over the past 5 seconds. Please forgive my newbie setup.

According to the SSD manufacturer this model is rated for up to 2.8 A (peak, 10 us) at 3.3 V. For six drives that adds up to a peak of a bit over 55 W (6 × 2.8 A × 3.3 V ≈ 55.4 W).

I tried to find voltage regulators near the ports but couldn't see anything like that, only 6 larger capacitors, probably bypass caps for each port. I didn't want to detach the PCB and take the heat sink off, so I might have missed something. Curious, I buzzed out all the 3.3 V lines on the M.2 ports with my multimeter, and it seems all 6 ports share a single 3.3 V bus. That suggests a single power source feeds all 6 ports, perhaps even the whole system including the wireless card, eMMC and so on. It would also explain why the issue happens more or less at random during load.

Sadly I don't see any easy way to measure the 12 V line coming out of the PSU without disassembly, so I can't tell whether it's just the 3.3 V source that is under-powered, the whole system, or both. The main PSU is rated for 12 V @ 3.75 A, so just 45 W. That doesn't seem like enough for a system sporting six M.2 ports meant to be filled with SSDs. It makes me wonder how they tested this thing.

I followed a few suggestions from Beelink's support: running the OS with PCI power management effectively disabled (the same thing the kernel log suggests; there's a rough sketch of how to apply that at the end of this post) and re-seating all drives. Nothing helped. One suggestion was to install the OS on the 4th SSD port, the faster one. While this workaround (I can't call it a solution) might seemingly help by keeping one of the SSDs (and the eMMC) idle most of the time, there can still be situations when all 6 drives are active at once: the storage drives plus the OS doing something like writing logs. I consider this workaround especially bad because it gives a false sense of security.

Seeing this kind of instability, I doubt even 5 drives would work reliably. To me the device is defective. I can't trust it, so I'll ask for an RMA.

I just wanted to let you know, since you said you might be using the ME mini to store your data. Even if it wouldn't be your main NAS, losing data would be annoying.
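For completeness, this is roughly what the power-management workaround looks like. It's just a sketch assuming a Debian-style GRUB setup (TrueNAS SCALE is Debian-based, but it may overwrite manual GRUB edits on update, so check the recommended way to set boot arguments on your version), and it didn't help in my case anyway:

# In /etc/default/grub, append the parameters from the kernel log to the existing options:
GRUB_CMDLINE_LINUX_DEFAULT="nvme_core.default_ps_max_latency_us=0 pcie_aspm=off pcie_port_pm=off"
# Then regenerate the GRUB config and reboot:
update-grub && reboot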