Probably my PC is not able to process the information fast enough, because error correction code (triggered by distorted current causing bit errors) slows it down.
That is not possible. The hardware does not work that way.
Again, if there are problems that get through your power supply, they will cause your computer to crash. Current x86-64 hardware (running Intel or AMD processors) does not have the facility to even slow down in case there is a power supply issue; it either works, or doesn't and crashes (locks up, most often).
It does work like that according to -> https://passat.crhc.illinois.edu/hpca_15_cam.pdf + https://scholar.google.pl/scholar?hl=pl&as_sdt=0%2C5&as_vis=1&q=error+correction+code+latency&btnG=
Modern PCs don't crash because of error correction systems; BSODs are not that common nowadays.
Those links are utterly irrelevant. First, because the current crop of AMD and Intel processors used in typical gaming computers does not even support error correction (ECC) for main memory. Second, the latency caused by error correction codes is constant; they are not "switched on and off" as needed.
Simply put, error correction codes cause a constant slowdown. Whether they correct an error or not does not affect the time taken.
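To see why the work is constant, here is a toy Hamming(7,4) decoder in Python. This is only an illustration of the principle (real DRAM ECC is SECDED logic in hardware, not software), and the function names are mine; the point is that the syndrome is computed unconditionally, so the decoder does the same work whether a bit error occurred or not:

```python
def hamming74_encode(data):
    """data: 4 bits -> 7-bit codeword (parity bits at positions 1, 2, 4)."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(bits):
    """bits: 7-bit codeword -> 4 data bits, correcting any single-bit error."""
    # The syndrome is ALWAYS computed; it is zero for a clean codeword and
    # equals the 1-based position of a flipped bit otherwise. Same work
    # either way -- only the final conditional flip differs.
    s = 0
    for i, b in enumerate(bits, start=1):
        if b:
            s ^= i
    if s:
        bits = bits.copy()
        bits[s - 1] ^= 1
    # Data bits sit at codeword positions 3, 5, 6, 7.
    return [bits[2], bits[4], bits[5], bits[6]]

data = [1, 0, 1, 1]
cw = hamming74_encode(data)
assert hamming74_decode(cw) == data       # clean codeword: decodes fine
bad = cw.copy()
bad[4] ^= 1                               # flip codeword position 5
assert hamming74_decode(bad) == data      # single error corrected, same path
print("decode work is identical with or without an error")
```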
My internet connection cannot be wonky, because I know many decent players from my city using the same provider, and they don't experience any issues at all; I saw it personally.
Incorrect. Depending on what kind of service you use (LTE, ADSL, DOCSIS, Ethernet to the premises), a borderline device on your end (either in your apartment or, if you use ADSL, DOCSIS, or Ethernet to the premises, the wiring or the switch in your building) can explain the observed connectivity issues.
For example, I happen to have Ethernet to the premises, i.e. an Ethernet trunk switch and switchboard in my building, with the Ethernet ports exposed in my electrical panel. I use a short patch cable to connect that to one of the four Ethernet connectors around my apartment. The building is connected to a local junction via fiberoptics (a fiberoptic switch for this neighborhood), and from there to the national trunk via further fiberoptics.
If I use a router or switch and it glitches, or the router in my building has a glitch, a typical symptom is a lot of lost packets. The ping time (or more properly, round-trip time) seems okay, but the actual problem is that too many packets are lost, so retransmissions occur often, and that increases the observed ping time 2× to N×. (Nasty internet service providers also insert RST packets, which disconnect an established TCP connection, to reduce the bandwidth used by heavy users.)
The test comparing your gaming setup performance when offline (and that means disconnected cables, not just "I'm not using the net right now"), to when online, is all you need.
If the problems are only fully reproducible when online, then it is your particular internet connection at fault. It does not mean that your internet service provider is shit, or that your friends elsewhere in the same city using the same internet service provider should see the same issues, because the issue could just be faulty hardware. Or a low-quality optical connection in one of the switches. A proper network test by an engineer would pinpoint the issue.
You could test TCP and UDP packet loss to a remote server, to check the exact round-trip times and percentage of lost packets; both raw, and when tunneled (encrypted). I don't know what software you'd use on Windows, because I don't use Windows, and I myself would check those things with code I'd whip up myself in a few minutes.
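As a sketch of what such a test looks like, here is a minimal UDP round-trip/loss probe in Python. For a real test you would point it at a remote echo server under your control; here a local echo thread stands in so the example is self-contained, and the name `probe_udp` is my own, not a standard tool:

```python
import socket
import threading
import time

def echo_server(sock):
    """Echo every datagram back to its sender, until the socket is closed."""
    while True:
        try:
            data, addr = sock.recvfrom(2048)
        except OSError:
            return  # socket closed, stop the thread
        sock.sendto(data, addr)

def probe_udp(host, port, count=20, timeout=0.5):
    """Send `count` datagrams; return (loss percentage, list of RTTs in ms)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    rtts, lost = [], 0
    for seq in range(count):
        t0 = time.monotonic()
        s.sendto(str(seq).encode(), (host, port))
        try:
            s.recvfrom(2048)
            rtts.append((time.monotonic() - t0) * 1000.0)
        except socket.timeout:
            lost += 1
    s.close()
    return 100.0 * lost / count, rtts

# Local stand-in server; replace with a remote echo service for a real test.
srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

loss, rtts = probe_udp("127.0.0.1", srv.getsockname()[1])
print(f"loss: {loss:.0f}%  median RTT: {sorted(rtts)[len(rtts)//2]:.3f} ms")
srv.close()
```

Running the same probe raw and through an encrypted tunnel, against the same server, shows whether loss happens on the path or is inflicted selectively on recognizable traffic.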
However, inputs are as slow as online.
You use USB for your gaming controllers, right? I wonder if your Windows setup respects the USB HID 1ms interval, or forces a longer one.
Basically, USB HID (Human Interface Device) is a way for each keyboard/mouse/joystick/gamepad to request the host computer to reserve N 64-byte slots per second, with the maximum being N=1000, or one slot every millisecond. This yields at most one millisecond latency per event (change in joystick orientation, button state, keypress or release, mouse movement, etc.). However, the operating system, in this case Windows, can refuse, and give the device fewer slots per second than it requests; for example, 16 slots, which yields 62.5 millisecond latencies, which is easily observed.
I do not know how to verify this in Windows. It can be due to hardware (too many HID devices on the same root USB port), or it can be a configuration thing (although I do not remember there ever being such a configuration knob for Windows). If you are using a long chain of USB hubs ending with a single USB cable to your motherboard, or perhaps two using stacked ports, split them, so that your controllers are connected to different root USB ports on your motherboard. This ensures there is the maximum number of HID slots available.
(I don't know which software you use in Windows to explore the USB tree. In Linux, the /sys/bus/usb/ pseudo-tree contains this information. USBView?)
In Linux, the USB traffic is easily tracked and dumped using Wireshark, and a snapshot with timestamps will quickly tell if there is a hardware or USB issue. A quick web search says USBPcap should be able to do the same in Windows. Checking the interval between consecutive events from the same controller will tell if this indeed is the cause for the input latency. The timestamps will have "noise" because the software does not record the timestamp at the exact moment the packet is received (there is a small delay until one of the processor cores gets to handle the incoming packet).
For example, if the problem was that for some reason a controller was only given 32 slots per second (i.e. 1000/32 = 31.25 millisecond intervals between events), and an event occurred at time 236.220115 seconds, all events from the device would occur at times 236.220115 + K/32 seconds (plus or minus, say, 0.001000 seconds), where K is an integer.
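That grid check is easy to script once you have the timestamps out of a capture. The sketch below uses synthetic data in place of a USBPcap/Wireshark dump (the jitter term models the recording delay mentioned above, and `grid_deviation` is my own name, not a standard tool): if the 32 slots/s hypothesis is right, every event lands within the noise of a K/32-second grid, while a wrong hypothesis drifts far off it.

```python
import random

def grid_deviation(timestamps, slots_per_second):
    """Max distance (in seconds) of each timestamp from the nearest K/N
    grid point, measured relative to the first event."""
    interval = 1.0 / slots_per_second
    base = timestamps[0]
    worst = 0.0
    for t in timestamps:
        offset = (t - base) / interval
        worst = max(worst, abs(offset - round(offset)) * interval)
    return worst

# Synthetic capture: events every 1/32 s starting at 236.220115 s, with up
# to +-1 ms of timestamping noise; only every third slot carries an event.
random.seed(1)
events = [236.220115 + k / 32 + random.uniform(-0.001, 0.001)
          for k in range(0, 200, 3)]

# Right hypothesis: deviation stays within the ~1 ms noise.
print(f"at 32 slots/s: {grid_deviation(events, 32)*1000:.3f} ms off grid")
# Wrong hypothesis (30 slots/s): events drift many milliseconds off grid.
print(f"at 30 slots/s: {grid_deviation(events, 30)*1000:.3f} ms off grid")
```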