Please come back once you had to maintain a large software system that has to be both multi-platform and support multiple toolchains (e.g. Visual C++ and gcc/clang).
[...]
When a Windows installer makes a mess of your system by installing all sorts of random crap all over the machine (drivers, background services, updater, DRM ...) and/or fails at it, nobody seems to blame Windows for it either.
You might be surprised. I shipped some USB 2.0-based hardware at a point in time when USB 3.0 was still relatively new. It turned out that my firmware had a malformed device descriptor entry that caused Windows to BSOD immediately after the device was plugged into a USB 3.0 port. The first customer who reported it blamed Windows, not me. "Hey, just thought you'd like to know about this," was how he phrased it, rather than, "Hey, dumbass, fix this stupid thing or refund my money," which would have been entirely appropriate.
With all the quality problems associated with Windows 10, I suspect that this scenario would play out exactly the same way today. It's a shame that Linux isn't better-positioned to take a share of the desktop market away from Microsoft.
* Lack of compatibility between Linux distros - they are actually fairly compatible, but if you are worried about this, pick one that is well supported and stick with it (and software distribution issues are addressed below).
* Systemd bashing - can we, please, finally drop this nonsense fight? Most people who complain about it here seem to do so basically only because:
a) it is different than what they were used to
b) it is "not unix philosophy"
(or "Lennart Poettering is an ass" - that's more a political than a technical discussion)
c) it has to be some evil conspiracy pushed on everyone by Redhat - it actually improves compatibility between the distros (the complaint above - before, every distro used to have a different init setup, different init scripts, and a different way of handling system services, and supporting that when writing and distributing software was a nightmare).
Also most people have no clue what systemd really does and why (no, it isn't just an init system and there are good reasons for it). Yes, it is different than just a bunch of scripts symlinked from /etc/rc.d but it is not that different or complicated to use and there are plenty of advantages (such as automatic supervision of your services).
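For instance, the automatic supervision mentioned above takes only a few lines of unit file. This is a minimal sketch with a hypothetical service name and paths, not any particular distro's packaging:

```ini
# /etc/systemd/system/mydaemon.service  (hypothetical example)
[Unit]
Description=Example supervised daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon --foreground
# Supervision for free: restart the service if it crashes.
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

A single `systemctl enable --now mydaemon` then starts it and registers it at boot, with the same commands on any systemd-based distro.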
It's a shame that Linux isn't better-positioned to take a share of the desktop market away from Microsoft.
I don't think Linux will ever be ready for general consumer use unless the developer community can set individual ambitions and vanity aside and agree on one "ready to use" Linux distribution. (But then, where would the fun be in that?)
Got some laughs reading this thread - thanks, all, for that!
In fact, now you need to watch a 15 minute Youtube video some nerd has recorded just to know that. When all that's needed is a small number of written lines, I cannot see the point at all in:
- some guy spending hours recording a video
- thousands of people spending 15 minutes watching that video
Regards.
So *please* if you work on programs which have their roots in AT&T/BSD Unix, don't break existing behavior.
When the build process fails on a large project with a page full of dependencies spanning a large range of supported versions, you're up shit creek regardless.
These large projects need to optimize for the common case, where you can automate/hide the complexity of configuring the build process - they need something like autoconf. If it makes life harder when it does fail, well, sucks to be you.
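To illustrate the common case: a handful of configure.ac lines lets the build probe the toolchain automatically. This is a minimal sketch with a hypothetical project name, not a complete build setup:

```m4
AC_INIT([foo], [1.0])        # hypothetical project name/version
AM_INIT_AUTOMAKE([foreign])
AC_PROG_CC                   # find a working C compiler (gcc, clang, ...)
AC_CHECK_LIB([m], [sqrt])    # probe: can we link sqrt() from libm?
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
```

Running `autoreconf` and then `./configure` handles the platform and toolchain probing that would otherwise have to be hand-maintained per compiler and per OS.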
The code was so buggy, that I set things up so that if a user typed "foo" it looked in a table of user ids and program versions and if there was a version for that user ran that instead of the regular version. I had to do that because I was often fixing different bugs in the same program during the course of a single day. Once everything was tested, it became the standard version and the user specific versions went away.
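The per-user dispatch table described above could be sketched like this. A hedged illustration only - the names, paths, and table layout are invented, not the original implementation:

```python
# Hypothetical per-user override table: (user, program) -> test-build path.
# Names and paths are made up for illustration.
OVERRIDES = {
    ("alice", "foo"): "/opt/foo/test-builds/foo-alice",
}

def resolve(user, program, overrides, default_path):
    """Pick the binary a given user should run: their private test build
    if one is registered, otherwise the regular version."""
    return overrides.get((user, program), default_path)

# The wrapper installed as "foo" would look up the invoking user and
# exec the resolved path, e.g.:
#   import getpass, os, sys
#   target = resolve(getpass.getuser(), "foo", OVERRIDES, "/usr/bin/foo")
#   os.execv(target, [target] + sys.argv[1:])
```

Once a fix is verified, deleting the table entry makes everyone fall back to the standard version, which matches the workflow described above.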
Problem is, you don't seem to be a quichevore.
http://www.bernstein-plus-sons.com/RPDEQ.html
c) it has to be some evil conspiracy pushed on everyone by Redhat
This is why I usually have a policy of "from distribution repo only".
1) If you power down the PC when Linux is still running, 1 out of 5 times you will break Linux. That does not happen with other OS (at least Irix and Windows XP -> 10).
No. Just, no.
It depends very much on the filesystem in use and the activity on the machine at the time of shutdown. IME, on ext2 filesystems, I'd say it's more like 1 in 50 as a worst case. On journaling filesystems, it's much closer to 0 in 50.
CentOS 7 doesn't have a package for the mtd (memory technology device) tools. So I'm having to build a kernel, and I have been reminded of that great innovation of GNU/Linux: the progress spew with no information.
Mustn't alarm anyone by reporting the actual commands, argument order, and all the nasty compile and link details. We'll just say we're compiling foo.o. That should do. Trust us. We're *much* smarter than you are :-(
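The mechanism behind that terse "CC foo.o" output is a quiet-command idiom in the makefiles. A simplified sketch, not the actual kbuild code:

```make
# Simplified sketch of the kbuild "quiet command" pattern.
ifeq ($(V),1)
  Q =          # verbose: echo the real command
else
  Q = @        # quiet: suppress it, print a short tag instead
endif

%.o: %.c
	@echo "  CC      $@"
	$(Q)$(CC) $(CFLAGS) -c -o $@ $<
```

With a real kernel tree, `make V=1` flips this switch and prints every compile and link command verbatim.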
Of course, in a way I shouldn't be surprised. I spent a lot of time over the years fixing library link list ordering for people who didn't comprehend the implications of single pass linking.
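The single-pass behavior in question can be modeled in a few lines. This is a toy model of the ordering rule, not real ld semantics, and the symbol names are invented:

```python
def single_pass_link(items):
    """Toy model of a single-pass linker.

    items: the command line, in order. Each entry is either
      ("obj", defs, refs)  - an object file, always included, or
      ("lib", members)     - an archive; members is a list of (defs, refs).
    An archive member is pulled in only if it defines a symbol that is
    *currently* undefined - which is why argument order matters.
    Returns the set of symbols still unresolved at the end.
    """
    defined, undefined = set(), set()

    def include(defs, refs):
        defined.update(defs)
        undefined.difference_update(defs)
        undefined.update(r for r in refs if r not in defined)

    for item in items:
        if item[0] == "obj":
            include(item[1], item[2])
        else:
            for defs, refs in item[1]:
                if defs & undefined:
                    include(defs, refs)
    return undefined
```

With `main.o` before the library (as in `cc main.o -lfoo`), the library member satisfying `helper` is pulled in; with the library first (`cc -lfoo main.o`), nothing is undefined yet when the archive is scanned, so `helper` stays unresolved.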
You might be surprised. I shipped some USB 2.0-based hardware at a point in time when USB 3.0 was still relatively new. It turned out that my firmware had a malformed device descriptor entry that caused Windows to BSOD immediately after the device was plugged into a USB 3.0 port. The first customer who reported it blamed Windows, not me. "Hey, just thought you'd like to know about this," was how he phrased it, rather than, "Hey, dumbass, fix this stupid thing or refund my money," which would have been entirely appropriate.
With all the quality problems associated with Windows 10, I suspect that this scenario would play out exactly the same way today. It's a shame that Linux isn't better-positioned to take a share of the desktop market away from Microsoft.
The fact that you had a bad USB descriptor entry is your bug. The fact that your bug BSoD'd Windows is Microsoft's bug. IMO, their bug is worse than your bug.
To be fair, the BSOD originated from the kernel-level driver for the USB root hub in question, which didn't come from MS but from a third-party vendor like Renesas or Intel or someone like that.
I don't see how Linux would have behaved any differently in response to a critical fault at ring 0. Whether a stereotypical Linux user would have blamed the OS, the root hub vendor, or the device vendor, I'm not sure, but I do tend to think the device vendor would have received the bulk of the negative feedback.