Quote: The early Gnu programmers were traditional Unix users. Adding a feature was OK, but altering existing options, syntax and semantics was "simply not done".
I think you are describing the first two seconds of GNU/Linux here. Linux is notoriously bad at keeping existing behaviour. If you want any chance of creating a Linux application which works on any distribution, you have to ship it with a full set of libraries (or link statically), otherwise it simply won't work. Look at Firefox for example: it comes with a full pack of libraries. The whole concept of shared libraries is flawed from the start, and it just wastes space instead of saving it.
Gnu/Linux has become just like Windows.
My biggest complaint is Octave which I used for many years. It cannot be compiled on Solaris because the autoconf crap is so screwed up.
It seems like most of the heat in this rant is properly directed at the distribution vendors, not at Linux (the kernel) or (most of) gnu utilities.
Quote: My biggest complaint is Octave which I used for many years. It cannot be compiled on Solaris because the autoconf crap is so screwed up.
Would you say Autoconf is such a mess that it would be worthwhile transitioning to something newer like CMake? :)
borjam@nvme1:/usr/src/sys/dev/ixgbe % ifconfig -vvvvvv ix2
ix2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=e407bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
ether 0c:c4:7a:X
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
media: Ethernet autoselect (Unknown <rxpause,txpause>)
status: active
plugged: SFP/SFP+/SFP28 1X Copper Passive (Copper pigtail)
vendor: Intel Corp PN: XDACBL3M-C SN: XXXXXXXX DATE: 2016-05-26
Class: 1X Copper Passive
Length: short distance
Tech: Passive Cable
Media: Twin Axial Pair
Speed: 100 MBytes/sec
SFF8472 DUMP (0xA0 0..127 range):
03 04 21 01 00 00 04 41 84 80 D5 06 64 00 00 00
00 00 03 00 49 6E 74 65 6C 20 43 6F 72 70 20 20
20 20 20 20 00 00 1B 21 58 44 41 43 42 4C 33 4D
2D 43 20 20 20 20 20 20 43 20 20 20 00 00 00 61
00 00 00 00 4D 37 42 30 38 39 37 30 20 20 20 20
20 20 20 20 31 36 30 35 32 36 20 20 00 00 00 42
20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
media: Ethernet autoselect (10Gbase-Twinax <full-duplex,rxpause,txpause>)
media: Ethernet autoselect (Unknown <rxpause,txpause>)
plugged: SFP/SFP+/SFP28 1X Copper Active (Copper pigtail)
vendor: BROCADE PN: 58-1000027-01 SN: CBXXXXXXXXX DATE: 2009-12-11
ix0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=e407bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
ether 0c:c4:7a:X
nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
media: Ethernet autoselect (10Gbase-Twinax <full-duplex,rxpause,txpause>)
status: active
plugged: SFP/SFP+/SFP28 1X Copper Passive (Copper pigtail)
vendor: Intel Corp PN: XDACBL3M-C SN: XXXXXXX DATE: 2016-05-26
Quote: Where can I start to describe what's wrong with Linux?
Nowhere. Everything goes smoothly as long as you use a stable, well-tested distribution on a PC with hardware that Linux supports (the latter is much less of an issue nowadays), and get some training so you know what you are doing. I've been running Linux servers at customers since around 1996, using Debian, until I quit doing company networks. However, before letting Linux loose on my customers I took a one-day hands-on training on how to install and configure a Linux server. Even today Debian is one of the better distributions if you want to do serious work with Linux.
Quote: Haha. This is typical experience, yes. And when Linux is broken, getting it fixed is nigh on impossible even if you pay DeadRat lots of money. And that doesn't even cover the crap you have to deal with on top of it, with systemd-networkd or, even worse, NetworkManager.
Ahh yes, DeadRats. I have more amusing accounts of dealing with Linux admins struggling with trivial problems because of brain-dead default settings in precompiled packages. Apache in prefork mode (none of the ones I know were aware that it was possible to use threaded MPMs!) and my favourite RH-specific one, the most useless and bombastic piece of documentation: a Performance Tuning Handbook that didn't mention where to change something as important as "maxusers".
Exactly the same issue is present in Debian or whatever Linux distribution you want to put up as an example, because the problem is the same.
My conspiracy theory of the day is the churn is generally promoted by the companies that make their money selling support. If things were stable and just worked they'd not have to sell as much support, so it's in their best interest to keep things brittle, fragile and a moving target.
If you do not recognize the allusion in the subject line or know what an allusion is, *please* just skip this.
Quote: So, FreeBSD doesn't have that effect? (Yet. If a significant movement develops towards it, TPTB (MS etc) will do their best to inject the churn poison into FreeBSD too.)
Not so far, or at least not to anywhere near the same extent.
Btw, has it occurred to you that since MS is the master of churn, and considers Linux the enemy of their business model, that it's highly in Microsoft's favor if Linux dies by churn?
Quote: It seems like most of the heat in this rant is properly directed at the distribution vendors, not at Linux (the kernel) or (most of) gnu utilities.
That's actually the root of all that evil. There is no such thing as a kernel without an OS. An operating system is a tightly coupled and trusted set of components: you can usually divide it into kernel and userland tools, split the userland tools into administration, development and user commands, and so on.
I am sure most Linux users will not fully understand the relevance of this example and they will dismiss it as the complaints of a fastidious BSD idiot fanboy.
Yeah. Which was the first version that didn't require a reboot because you changed the IP address?
Meanwhile, in Windows, I just plug the Cat5 cable into the jack in the back of the PC, and it just works. :popcorn:
Quote: The idea that a user should have to dig into the intricacies of Ethernet duplex connectivity in order to use their system is deeply pathological. Say what you will about Windows, but when I want to set up a system, I just plug the Cat5 cable into the jack in the back of the PC, and it just works. :popcorn:
Yes. Of course. Tell me that when in the 90's I spent a weekend replacing the network of a whole hospital. Peecees running Windows, most not negotiating properly.
No version of Windows I have ever used, starting with Windows 95, has ever required that.
Just make. All you need is make. And a clue stick.
Now I recall another unfunny anecdote: setting a fixed IP address on a Raspberry Pi running Raspbian. Someone decided it should always run DHCP, and to use a config file with such an intuitive name as d-h-c-p-c-d.conf. Wow. As it happened, I was setting up a couple of Raspberries shortly after this came out, and doing a Google search for the right way to do it was so much fun.
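For the record, once found, the incantation itself is tiny. A sketch of the dhcpcd.conf method (interface name and addresses are made-up examples):
Code: [Select]
# /etc/dhcpcd.conf -- static address for eth0 (example values)
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1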
I stuck a Pi w/ Raspbian inside my Generac generator so I could run "genmon". Raspbian's a Linux distro, right, so how hard can it be to figure out (I'm principally a CentOS/RHEL/Fedora guy)? All I want to do is assign the Pi a freakin' static IP address. I spent HOURS attempting to figure out how to do this. I'm like WTF. Finally I just gave up and reconfigured my DHCP server so I could create a reservation for it. Okay, admittedly I do not d
To be fair, 99.5+% of RPi users do almost certainly want to use DHCP.
Quote: The problem is that you (and I) might be that less than 1 in 200 case.
Yes, but still pure idiocy. There are many valid reasons for disabling a DHCP client and using static addresses instead. Why make it harder? There is no reason. No benefit. Ah yes, someone answered a "why???" question with "there are countless opportunities there". For blunder, I would add.
I understand changes when there is a good reason. But, gratuitous? Like all that systemd crap and their stellar response to potential security issues.
https://github.com/systemd/systemd/issues/6237
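If memory serves, that issue is the one where a User= value that systemd considers invalid (a name starting with a digit) was silently ignored instead of failing the unit, so a hypothetical service like this would quietly start as root:
Code: [Select]
# Hypothetical unit illustrating the linked issue
[Service]
User=0day                          # rejected as an invalid user name...
ExecStart=/usr/local/bin/mydaemon  # ...so this ran as root instead of failing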
Quote: If you do not recognize the allusion in the subject line or know what an allusion is, *please* just skip this. I don't really have the patience to deal with the rampant ignorance of millenials. I fully realize that it is not your fault. You have been conned by an educational system that is only interested in serving the interests of the faculty and staff. But it is a major impediment to communication.
I fundamentally agree with your post, other than the cheap jab at millennials. Remember that:
Boomers forget how uninformed they were when they were the same age!
I built a cluster of RPi's awhile back to use for a demo for schoolchildren. I remember noodling with IP addresses and DHCP. In the end, I found it easier to leave the cluster units' DHCP clients running and run a DHCP server on a multi-homed gateway unit, following an IP assignment policy that I could control in one place. So, I get fixed IP addresses, but the cluster units don't know they're getting fixed IP addresses. Even better would be if I could netboot the slave Pis from the master, but I didn't need to get that far for the demo.
Not sure why I told that anecdote, exactly.
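It's a useful anecdote, though: the "one place" policy can be just a few dnsmasq lines on the gateway. A sketch with made-up addresses (b8:27:eb is the Raspberry Pi Foundation OUI):
Code: [Select]
# /etc/dnsmasq.conf on the gateway -- fixed leases keyed by MAC
dhcp-range=10.0.0.100,10.0.0.200,12h
dhcp-host=b8:27:eb:00:00:01,10.0.0.11   # node1
dhcp-host=b8:27:eb:00:00:02,10.0.0.12   # node2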
The problem with Raspbian DHCP is that no one is paying the dhcpcd creator to be compatible with /etc/network/interfaces
Meanwhile the other big dhcp client is being created by Redhat
Quote: It's not his job to. No dhcp client is 'compatible' with 'insert arbitrary OS script here' - it is up to the distribution to integrate it.
Which is a big maintenance headache: now you have to QA your shim each and every update ... because the author doesn't give a shit about breaking it. They don't wanna; I don't blame them.
Quote: Er, since when do they own the ISC?
Well, that too, but that now generally only runs after systemd has its claws into things ... and thus you get to set up your static IP in /etc/systemd/network.
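For completeness, the systemd-networkd version looks roughly like this (file name and addresses are examples):
Code: [Select]
# /etc/systemd/network/10-static-eth0.network (example values)
[Match]
Name=eth0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=192.168.1.1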
If you don't mind being datamined you can do far worse than use a Chromebook/Chromebox, Linux on the desktop is mainstream.
QA, fast response to problems and hardware certifications takes money and central organization. The community can develop Linux, but it takes a company to make a mainstream desktop from it.
Please don't blame linux! Blame specific linux distributions! Some think they have to move to the latest and greatest fancy stuff. But you don't have to. I'm running the same non-fancy window manager for ages.
None of which are party to early disclosure or managed properly.
If you do not recognize the allusion in the subject line or know what an allusion is, *please* just skip this. I don't really have the patience to deal with the rampant ignorance of millenials. I fully realize that it is not your fault. You have been conned by an educational system that is only interested in serving the interests of the faculty and staff. But it is a major impediment to communication.
...
Back in the days ...
...
We now live in a world...
...
I have read extensively the history of computing from the 1930's and 40's to date and was a player in it for the last 40 years. I have seen the same mistakes repeated so many times it fills me with despair.
...
I don't care *who* you are. You are *not* the smartest person in the room. Human knowledge at this date is far larger than anyone can absorb.
Quote: None of which are party to early disclosure or managed properly.
Like Devuan for example? >:D
But still better than MS with their commercial OS. Aren't you amazed how much you get for free?
PS -- this is also the reason I don't use IDEs. All the millennials think it's insane that I go without the niceties that IDEs provide (and there are some niceties, indeed) but I just got tired of learning one after another for what seemed like no reason at all. I'm going to stick with vi and make until you pry them from my cold, dead hands. (Actually, I do use vim, so I guess I'm not a purist!)
But let's go a little further back. This stinker started before all this, with the GNU products. What happened is someone had a userland (Stallman) and someone had a kernel (Linus), and they were glued together with poop, straw and sticky tape. GNU products are generally buggy as hell, sometimes absolutely crazily badly implemented, and have so many incompatible extensions that Stallman's vendor lock-in comments are nothing but raging comedic hypocrisy.
vi is handy for making quick edits but I can't imagine trying to use it to write any serious code. I guess once you know it inside and out it would be fine but sheesh, you'd just about have to be a masochist.
Now get off my lawn! ;D
Much of the rant was related to the surplus of system update utilities, all alike, but with different names, options, etc.
Every time I run apt (that's the flavor of the day for me) from a non-interactive terminal, I get the following warning:
Code: [Select]
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
And I think to myself the same thing every time: "F*k you! You folks need to pull yourselves together and iron that the f*k out, because the rest of us need to get on with our lives and do not have time to babysit goddamned package managers."
Seriously, who is responsible for that sh*t and what is wrong with them?
Quote: Seriously, who is responsible for that sh*t and what is wrong with them?
Programmer's remorse: a piece of software can always be improved! Some programmers just can't stop working on a piece of software.
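The practical upshot, per apt's own man page, is to keep scripts on the older tools, whose interfaces are kept stable:
Code: [Select]
# apt(8) is meant for interactive use; in scripts prefer apt-get/apt-cache.
# (package name is a placeholder)
apt-get update -q
apt-get install -y --no-install-recommends some-package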
Henry Spencer said it best: "Those who do not understand Unix are condemned to reinvent it, poorly."
Quote: It seems like most of the heat in this rant is properly directed at the distribution vendors, not at Linux (the kernel) or (most of) gnu utilities.
I'm building and using my ATE/calibration system using GNU tools: Octave, gawk, nc, ssh. My system is CLI-based, so I know what's going on, and so it runs smoothly on lightweight hosts like rpis and old laptops. Just recently I've automated much of my testing/calibration process using makefiles (make clean; make config; make run; make postrun; make report). The GNU tools, and Linux, have been a godsend.
My focus is thermometry (resistance and optical). Not ready to post any work here yet, but getting close.
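Something like the following is presumably the shape of that top-level makefile; a sketch only, with invented script names, since the actual files aren't posted:
Code: [Select]
# Hypothetical Makefile driving a calibration run (make config; make run; ...)
.PHONY: clean config run postrun report
clean:   ; rm -rf work/
config:  ; mkdir -p work && ./configure-instruments.sh        # placeholder script
run: config ; octave --no-gui run_measurements.m              # placeholder script
postrun: ; gawk -f reduce.awk work/raw.dat > work/reduced.dat
report:  ; octave --no-gui make_report.m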
The worst thing is: Linux is being used as an example in many OS design courses. Amazingly it was written as an operational alternative to Minix. Minix was conceived as a teaching tool setting performance and functionality aside, while Linux was the opposite. And now the dirty hack has become the teaching tool of choice. To me it's like teaching Excel instead of programming and SQL instead of data structures.
I am sure most Linux users will not fully understand the relevance of this example and they will dismiss it as the complaints of a fastidious BSD idiot fanboy.
End of rant, congratulations if you didn't fall asleep!
In the kernel, in gnu utils, or in the distro? I'd wager the proportions are less than 1%, less than 4%, and greater than 95%, respectively.
Quote: It seems like most of the heat in this rant is properly directed at the distribution vendors, not at Linux (the kernel) or (most of) gnu utilities.
No. It is gratuitous changes in syntax and semantics breaking things.
Quote: If you do not recognize the allusion in the subject line or know what an allusion is, *please* just skip this.
I was expecting something SJW related!
Social Justice Warrior (https://en.wikipedia.org/wiki/Social_justice_warrior)
Oh, please, no. Gnu make(1) works just fine if you know how to use it. KISS
Quote: No. It is gratuitous changes in syntax and semantics breaking things.
I decided to have a look at Xen, and for whatever reason, it seemed like a good idea to do it with CentOS 7...
[root@xen1 rh]# ifconfig
bash: ifconfig: command not found
[root@xen1 rh]# netstat -rn
bash: netstat: command not found
[root@xen1 rh]# arp -na
bash: arp: command not found
They removed important commands and replaced them with "ip". Bastards.
Quote: They removed important commands and replaced them with "ip". Bastards.
Ah well, at least this one isn't on Redhat. It was those damn Russians.
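For reference, the iproute2 equivalents (or just yum install net-tools on CentOS 7 to get the old ones back):
Code: [Select]
ip addr show    # ~ ifconfig
ip route show   # ~ netstat -rn
ip neigh show   # ~ arp -na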
Oh, please, no. Gnu make(1) works just fine if you know how to use it. KISS
So tell me ... without an incredible amount of ad-hoc scripting (i.e. recreating a meta-build system on the fly and, as with all such endeavours, poorly), how would you allow building something of the complexity of Octave with the same amount of instructions as you can with a meta-build system? (When it works, obviously.)
This is not some POSIX only, or POSIX+X dependent application. You can't appeal to the authority of ancients on this one, these kinds of graphical cross-platform applications have no parallel in early Unix. Even a multi-architecture OS is trivial to build compared to large cross platform graphical applications. New problems, new solutions ... to say they are not necessary is putting you in the same shoes as those who are changing the ancient ways of dealing with long standing problems, ie. pretending you are the smartest person in the room. It's always tempting to do so isn't it? :)
Can't really pin that one on millennials. I'm gen x and even most of the guys my dad's age have been using IDEs for decades. You don't *have* to change to the latest one each year. I've been using the same versions of ISE and Quartus I learned back when I first started messing with FPGAs.
That's a huge mistake in the FPGA world. Those IDEs have gotten so much better (and faster) over the past few years it's not even funny.
That and from what little I've seen, Vivado is most certainly not faster than ISE. What I've got works just fine for what I'm doing, I know how to use it and it gets the job done. How is it a mistake to stick with what I know? I'm recreating 30+ year old hardware, I don't need the latest or greatest.
Quote: My biggest complaint is Octave which I used for many years. It cannot be compiled on Solaris because the autoconf crap is so screwed up.
Quote: Would you say Autoconf is such a mess that it would be worthwhile transitioning to something newer like CMake? :)
Quote: Oh, please, no. Gnu make(1) works just fine if you know how to use it. KISS
Please come back once you had to maintain a large software system that has to be both multi-platform and support multiple toolchains (e.g. Visual C++ and gcc/clang).
[...]
When a Windows installer makes a mess of your system by installing all sorts of random crap all over the machine (drivers, background services, updater, DRM ...) and/or fails at it, nobody seems to blame Windows for it either.
Quote: * Lack of compatibility between Linux distros - they are actually compatible fairly well, but if you are worried about this, pick one that is well supported and stick with it (and software distribution issues are addressed below).
The problem is, some subtle differences can be catastrophic. Not least because the distros seem to be the same, but aren't.
Quote: * Systemd bashing - can we, please, finally drop this nonsense fight? Most people who complain about it here seem to do so basically only because:
I have nothing against "different" as long as there is an improvement. However, I don't like Christmas trees and new stuff for its own sake.
Quote: a) it is different than what they were used to
Quote: b) it is "not unix philosophy"
I think that the Unix philosophy proved some technical merit beyond any doubt long ago. So, yes, deviating from it is not good unless a benefit is shown. Is it? No. Carry on, no political issue here.
Quote: Lennart Poettering is an ass - :blah:, that's more a political than technical discussion
Except when it affects his response to security and reliability issues. Such an important element of an OS must be simple and straightforward.
Quote: c) it has to be some evil conspiracy pushed on everyone by Redhat - :palm: - it actually improves compatibility between the distros (complaint above - before, every distro used to have a different init setup, different init scripts, a different way of handling system services, and supporting that when writing and distributing software was a nightmare).
Yes, why don't we ditch the Posix APIs and adopt Win32 for Unix systems? We will improve compatibility! :box: :box:
Quote: Also most people have no clue what systemd really does and why (no, it isn't just an init system and there are good reasons for it). Yes, it is different than just a bunch of scripts symlinked from /etc/rc.d, but it is not that different or complicated to use and there are plenty of advantages (such as automatic supervision of your services; see the sketch below).
Complicated, overloaded -> bad.
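To make the "automatic supervision" point concrete, it amounts to a couple of lines in a unit file; a sketch with a made-up service name:
Code: [Select]
# /etc/systemd/system/mydaemon.service (hypothetical)
[Service]
ExecStart=/usr/local/bin/mydaemon
Restart=on-failure   # restart the process if it dies
RestartSec=2

[Install]
WantedBy=multi-user.target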
Quote: When a Windows installer makes a mess of your system by installing all sorts of random crap all over the machine (drivers, background services, updater, DRM ...) and/or fails at it, nobody seems to blame Windows for it either.
I do, actually. When did the geniuses discover that it might be a good idea to keep different versions of shared libraries, and different directories for system and application provided shared libraries? Of course the infamous 8.3 file names made it harder.
Quote: It's a shame that Linux isn't better-positioned to take a share of the desktop market away from Microsoft.
Quote: I don't think Linux will ever be ready for general consumer use unless the developer community can put individual ambitions and vanity behind, and can agree on one "ready to use" Linux distribution. (But then, where would be the fun in that? ::))
There is another serious issue. The Holy Church of Licenseology.
:) I get some laughs reading these threads, thanks all for that!
In fact, now you need to watch a 15-minute Youtube video some nerd has recorded just to know that. When all that's needed is a small number of written lines |O, I cannot see the point at all in:
- some guy spending hours recording a video
- thousands of people spending 15 minutes watching that video
Regards.
So *please* if you work on programs which have their roots in AT&T/BSD Unix, don't break existing behavior.
When the build process fails on a large project with a page full of dependencies and a large range of supported versions, you're up shit creek regardless.
These large projects need to optimize for the common case, where you can automate/hide the complexity of configuring the build process, so they need something like autoconf. If it makes life harder when it does fail, well, sucks to be you.
Quote: The code was so buggy, that I set things up so that if a user typed "foo" it looked in a table of user ids and program versions and if there was a version for that user ran that instead of the regular version. I had to do that because I was often fixing different bugs in the same program during the course of a single day. Once everything was tested, it became the standard version and the user specific versions went away.
Problem is, you don't seem to be a quichevore ;)
http://www.bernstein-plus-sons.com/RPDEQ.html
c) it has to be some evil conspiracy pushed on everyone by Redhat
This is why I usually have a policy of "from distribution repo only".
1) If you power down the PC when Linux is still running, 1 out of 5 times you will break Linux. That does not happen with other OS (at least Irix and Windows XP -> 10).
Quote: 1) If you power down the PC when Linux is still running, 1 out of 5 times you will break Linux. That does not happen with other OS (at least Irix and Windows XP -> 10).
It depends very much on the filesystem in use and the activity on the machine at the time of shutdown. IME, on ext2 filesystems, I'd say it's more like 1 in 50 as a worst case. On journaling filesystems, it's much closer to 0 in 50.
No. Just, no.
CentOS 7 doesn't have a package for the mtd (memory technology device) tools. So I'm having to build a kernel and I have been reminded of that great innovation of Gnu/Linux. The progress spew with no information.
Mustn't alarm anyone by reporting the actual commands, argument order and all those nasty compile and link details. We'll just say we're compiling foo.o. That should do. Trust us. We're *much* smarter than you are :-(
Of course, in a way I shouldn't be surprised. I spent a lot of time over the years fixing library link list ordering for people who didn't comprehend the implications of single pass linking.
Quote: When a Windows installer makes a mess of your system by installing all sorts of random crap all over the machine (drivers, background services, updater, DRM ...) and/or fails at it, nobody seems to blame Windows for it either.
The fact that you had a bad USB descriptor entry is your bug. The fact that your bug BSoD'd Windows is Microsoft's bug. IMO, their bug is worse than your bug.
You might be surprised. I shipped some USB 2.0-based hardware at a point in time when USB 3.0 was still relatively new. It turned out that my firmware had a malformed device descriptor entry that caused Windows to BSOD immediately after the device was plugged into a USB 3.0 port. The first customer who reported it blamed Windows, not me. "Hey, just thought you'd like to know about this," was how he phrased it, rather than, "Hey, dumbass, fix this stupid thing or refund my money," which would have been entirely appropriate.
With all the quality problems associated with Windows 10, I suspect that this scenario would play out exactly the same way today. It's a shame that Linux isn't better-positioned to take a share of the desktop market away from Microsoft.
To be fair, the BSOD originated from the kernel-level driver for the USB root hub in question, which didn't come from MS but from a third-party vendor like Renesas or Intel or someone like that.
I don't see how Linux would have behaved any differently in response to a critical fault at ring 0. Whether a stereotypical Linux user would have blamed the OS, the root hub vendor, or the device vendor, I'm not sure, but I do tend to think the device vendor would have received the bulk of the negative feedback.
Linux is far more solid than Windows is these days.
Nice joke. There was probably no program I tried that did not crash on Ubuntu. Just like that - poof... no error message, it just vanished.
Quote: My one experience with Ubuntu has deterred me from using it again.
It was probably running in RAM for people to test the software. You generally get the option of what you want upon boot or installation. Volatility can be a useful tool.
Quote: 1) If you power down the PC when Linux is still running, 1 out of 5 times you will break Linux. That does not happen with other OS (at least Irix and Windows XP -> 10).
In the old times Linux worshippers liked to brag about the ext2 filesystem speed. How did they achieve the miracle? By mounting it in async mode. :-DD
Quote: make V=1
Like I said: death by a thousand paper cuts.
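For the record, kbuild's verbosity knobs, as documented in the kernel's own makefiles:
Code: [Select]
make          # quiet "CC foo.o" style output
make V=1      # echo the full compile/link command lines
make V=2      # also print the reason why each target is rebuilt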
I know, it's not exactly 100% like everything which came before it, so it must be bad.
If Ken Thompson doesn't trust anything over about 10,000 lines of code I don't think there's much hope with multi-million line systems such as we use today.
Haven’t seen ext2 for about 10 years.
ext4 / xfs now. Never breaks. Apart from when you resize xfs and it doesn't up the inode count.
Ironically I've lost count of the number of fucked-up NTFS and HFS+ volumes I've seen just in the last 5 years. NTFS usually recovers fine though. I really don't trust HFS+.
Quote: And that's modern software :)
Yeah.
Quote: I had a lot of "fun" not too long ago when I was backing up my files to an NTFS drive and the power went out. I ended up having to transfer ~1TB of data off of the backup drive onto another hard drive, reformat it and then transfer it back, all while struggling with Windows' 255-character filepath limit. I don't think I'll be using NTFS again.
I had an amusing experience with NTFS (my only one, and just because I was called to the rescue!) back in 2005 or so.
Quote: Another typical problem is Windows' remote access and admin stuff, WinRM. It's a .Net process which creeps larger and larger until it explodes and then stops responding to requests. The protocol is HTTP, and the only way ansible can do its thing is by using WinRM to upload powershell scripts and execute them.
Ahhh, memories!
It's so good, Microsoft built SSH into Windows 2016 :-DD
My one experience with Ubuntu has deterred me from using it again.
Ran the test version from the USB stick, downloaded and installed components required to compile a program (including the program source code),
everything worked until the next session, when it forgot everything, as if it was all running in RAM. The source and binary for the program were also gone.
Why would anyone make a setting they didn’t want to stay on the USB drive? Even something as simple as setting a Web homepage, I’d expect to be saved.
So then I tried the proper install, and it wrote over a data SSD it had no business overwriting (it didn't have an OS on it, and wasn't the drive it booted from).
The Ubuntu installer failed at its one job, and also ruined all of the data on the 512 GB SSD. I couldn't see it in the Windows GUI, and formatted it with command-line diskpart.
That was harmful.
If Ken Thompson doesn't trust anything over about 10,000 lines of code I don't think there's much hope with multi-million line systems such as we use today.
He's right. You can break the rules on this, which is the point.
Citrix. Kill me.
I always wanted a SPARCbook. Still tempted to buy one if I see one. Sure it’ll run netbsd or something still.
It’s nice looking back at this stuff. Fond memories.
Oh no, not another circle jerk about Linux being unusable :palm:
The common problem is that the user gets punched in the dick wherever they turn.
Quote: There's an old saying stating that a good developer makes approximately 1 bug every 800 LOC.
After typing in the source code?
After compiling?
After testing?
After delivery to customers?
So I am reduced to using strace(1) to find out where this idiot system is looking.
Or you could simply look up where modules are stored.
Have a freebie: /lib/modules/$(uname -r)
As a general response to some comments. If you personally do not *start* solving a problem with an unfamiliar program by tracing the system calls or know people who do, you can't begin to appreciate the skill level that implies.
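For the curious, the basic pattern is short (the command name here is a placeholder):
Code: [Select]
# Log every file-touching syscall, children included, then see which
# paths the program tried and missed:
strace -f -e trace=%file -o /tmp/trace.log some-command
grep ENOENT /tmp/trace.log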
Eventually I learned from dmesg that the modules were considered tainted.
Quote: And how did that work out for you?
I'd say rather well. I was paid oil industry Stanford PhD day rates despite not having gotten my doctorate. I have several very close friends in that category. So I have a very good idea of what the pay scales are. A fairly important detail if you work as a contractor on 3-6 year contracts. I did not look for work. It looked for me by name.
I did not choose CentOS. Xilinx did. As for the "standard resource", I am quite aware of it. But it seems a substantial number of the Gnu/Linux crowd are not. So it is never clear *if* there is an error message logged, or *where* it is logged.
Quote: Eventually I learned from dmesg that the modules were considered tainted.
Right, you chased your tail until you looked at a standard resource for kernel messages.
And yes, CentOS is grossly out of date - if that's not apparent from the fact that it was released nearly four and a half years ago and uses a five and a half year old kernel, I begin to understand your problem.
At this point, I think all of the interesting and enlightening posts are probably done. My thanks to those who understood the issues for their comments. Some people brought up points I either did not know or had not considered. But it's getting unpleasant, so time to call a halt.
Quote: The only person I know of who is actively addressing this is Andy Tanenbaum and the Minix 3 project.
That's not really true; Linux's process and driver isolation slowly lumbers forward. They just have a lot more baggage to move in the process. For most of Linux's existence there wasn't even a no-execute bit in the page tables, and there was a 1-bit address-space ID for the TLB on their main platform. Linux was designed for the architecture it was given.
I strongly disagree.
- Those innovations which came about (or which we learned about) during our formative years, we consider the best thing since sliced bread. If you are in the right age bracket, these might include the original Unix systems and structured programming (no GOTOs please).
- Those innovations which come about later, when we are more settled, we tend to consider newfangled crap. Depending on your age, these might include object-oriented programming or Gnu/Linux, for example.
(Windows connection to phones etc is just as bad)
This article by David Presotto was published in 2000.
https://www.cse.iitb.ac.in/infolab/Data/utah2000.pdf
I don't claim any of these ideas as mine, of course, but I really agree with him.
Quote: As a general response to some comments. If you personally do not *start* solving a problem with an unfamiliar program by tracing the system calls or know people who do, you can't begin to appreciate the skill level that implies. The primary reason I'm having problems with this stuff are two fold:
I don't think *starting* with strace/dtrace is a remotely reasonable course of action. Start with man pages, start by reading log messages/eventvwr logs, start with a Google search. Start almost anywhere but tracing syscalls.
Comparing OOP (a software paradigm) with a particular program (Linux) is like comparing apples to siphonophores.
Rob Pike believes that the research done in "his" days was relevant, and that the more recent directions of the computer science and IT world are newfangled crap.
That's not what he suggested. He suggested systems research was done, which it pretty much is. It turns out the application of the technology, not the technology itself, was the big thing, and he was right.
As for "recent directions" and "newfangled crap" Rob Pike (and Ken Thompson) are currently looking after Go at Google...
It will be interesting to see if the quality of RH improves now that IBM owns it.
I read it when it was published, but the main issue was: systems software research had been rendered irrelevant not because it was a "complete" field, but because the market was happy with old and even crappy solutions and nobody wanted to bother adopting anything new.
I would argue that this article is in fact another example of my (yes, overly generalized and simplified) statements: Presotto (edit: Rob Pike) believes that the research done in "his" days was relevant, and that the more recent directions of the computer science and IT world are newfangled crap. Designing new operating systems is what the world needs in his view; that whole usability stuff is irrelevant.
Quote: Edit: Changed the attribution. I hope we are talking about the same article? The one you linked to is not by D.P.
Yes, my fault. Both were members of the same research group at Bell Labs and both coauthored several papers about the Plan 9 and Inferno operating systems.
Quote: Comparing OOP (a software paradigm) with a particular program (Linux) is like comparing apples to siphonophores.
Sorry, my answer was poorly worded. Anyway, Linux is not an innovation. At least in my close environment, where people favors
Huh? I used them as two completely independent examples of older vs. more recent innovations in computer science. Unix/Linux on one hand, and structured/object oriented programming on the other hand. Of course there is no point comparing Linux vs. OOP, and I didn't compare them.
Quote: The real problem is that the amount of code that needs to be usable
This. The inertia of the existing codebase is absolutely gargantuan.
Quote: debugger that will let you do a "print cos(a)*tan(b)". As I am a scientific and systems programmer and do *not* work on GUIs, that ability is *very* important.
That does also work in gdb (verified on gdb-7.11.1); you can print expressions in the source language. You can also extend gdb with Python functions. For example, (gdb.lookup_symbol('a'))[0].value(gdb.selected_frame()) will evaluate to the Pythonic value of variable a in the current frame.
It never ceases to amaze me how BSD license proponents say that the GPL license is keeping Linux from being more widely adopted. The statistics just do not support that argument. Theory is always nice and beautiful, and opinions interesting, but let's face it: reality overrules wishful thinking.
No, I'm not saying gdb is a particularly good debugger -- I don't much like it myself, and prefer a modular/unit verification approach rather than debugging whenever I can -- but I do similar scientific/systems work (mostly in C), and have found this useful on a couple of occasions.
Quote: Did you try my example or a different expression?
The exact example; I verified it with a tiny test C program with a and b as variables. If the binary defines, say, double f(double a, double b), you can do print f(1.0, 2.0) and, in general, use f() in expressions.
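A minimal repro of what's being described, for anyone who wants to try it (assumes gcc, gdb, and a debug build; file and function names invented):
Code: [Select]
/* t.c -- build with: gcc -g -O0 t.c -o t -lm */
#include <math.h>
double f(double a, double b) { return cos(a) * tan(b); }
int main(void) { double a = 0.5, b = 0.25; return (int) f(a, b); }

(gdb) break main
(gdb) run
(gdb) next
(gdb) print f(a, b)
(gdb) print cos(a) * tan(b)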
Quote: And multidimensional arrays in C are awful.
Fully agreed. They're horribly implemented in existing libraries (BLAS/LAPACK/ATLAS, GSL) also.
Quote: And it was pretty good code compared to the stuff research scientists write.
Fortran 95 and later is a pretty good tool for research scientists, because straightforward code generates pretty efficient binaries, and it is easy to write somewhat readable code. Other than that, the code research scientists have written that I've worked on is pretty stress-inducing, if you care about robustness, efficiency, and maintainability.
Quote: The hard part of numerical codes is the indexing in many dimensions of matrices which in seismic processing is pretty much everything.
I mostly deal with two-dimensional matrices, with submatrices or vectors as views into them. (I've shown some data structures elsewhere that make "views" first-class citizens and still allow efficient computation on current CPUs; it boils down to using (row*rowstride + col*colstride) for addressing, and a separate reference-counted data structure to own the actual numerical data each matrix refers to.) And also with huge distributed sets of 3D particle data, which requires all sorts of ordering tricks to have any hope of cache locality; the datasets are so large that without cache locality, memory bandwidth is a stricter bottleneck than the interparticle computations (various potential models).
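A sketch of that view scheme in C (all names invented here; reference counting elided):
Code: [Select]
#include <stddef.h>

typedef struct {
    double *data;        /* the actual numbers; shared, reference-counted */
    long    refs;
} mat_owner;

typedef struct {
    mat_owner *owner;    /* keeps the data alive while the view exists */
    double    *origin;   /* address of element (0,0) of this view */
    size_t     rows, cols;
    ptrdiff_t  rowstride, colstride;   /* in elements, not bytes */
} mat_view;

/* element (r,c) lives at origin[r*rowstride + c*colstride] */
static inline double mat_get(const mat_view *m, size_t r, size_t c)
{
    return m->origin[(ptrdiff_t)r * m->rowstride +
                     (ptrdiff_t)c * m->colstride];
}
/* a transpose or submatrix is then just another view: swap or scale
   the strides and move origin; no data is copied */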
Quote: It never ceases to amaze me how BSD license proponents say that the GPL license is keeping Linux from being more widely adopted. The statistics just do not support that argument.
I never said that. But the obsession with licenses has prevented the Linux world from adopting nice software, and often they have poorly reinvented stuff because of that.
There is so much wrong or out of date crap on the web that it's approaching having negative value.
It had been up for 467 days.
brad@bkcuk001:~$ uname -a ; uptime
Linux bkcuk001 3.16-0.bpo.3-amd64 #1 SMP Debian 3.16.5-1~bpo70+1 (2014-11-02) x86_64 GNU/Linux
02:16:32 up 1469 days, 46 min, 1 user, load average: 0.00, 0.01, 0.05
Quote: The funny thing is, it became something even religious.
You ascribe it to religion, because you dislike it for personal reasons. That is intellectually dishonest.
Quote: I am so sick of running multiple different Linux instances because almost nothing is actually portable.
Unfortunately, at the same time, that is exactly why Linux is so popular on servers and in scientific computing: because it can be customized to suit the task at hand, and not the other way around. It's just that it is also abused, in a way: choices are made, but never documented.
Quote: You ascribe it to religion, because you dislike it for personal reasons. That is intellectually dishonest.
Preposterous. Where did I say that I dislike the GPL license?
Quote: Borjam is correct though. Most of the maintainers I've spoken to over the years picked GPL because it was the one they had heard about. Not because of explicit ideology. If that's not a faith argument I don't know what is.
We must run in different circles, because that is completely different from what the people I've talked with about this have said. Cultural differences?
Quote: The reasons the closed source derivatives exist is because the maintainers of the actual projects usually do a piss poor job of maintaining them.
Bullshit. Closed source derivatives exist if and only if someone believes they can make money off of them. (There is nothing wrong with that, in my opinion; just let's be honest here.)
Quote: [...] BSD for me. Isn't that really what we want?
Don't change the subject! It is a completely different thing to discuss what licenses one wants to use for their own projects, than to discuss what licenses others should use, and why.
That's how your text reads to me, that's all. If you want, I can reword myself:
Quote: However, should all software be GPL licensed? Not at all.
Why are you telling that to me? I've already stated I use several licenses myself, so obviously I don't think all software should be GPL licensed.
Quote: I think no software should be GPL licensed as it removes practical freedoms. Call me religious if you want everyone :)
No, but idealistic, maybe. You think everything would be better if licensed under BSD or a similar license, but you have no basis for that; only hope. (If you said it was a truth revealed to you in a vision or something like that, then I'd call you a religious nutter.)
Must be, but I swear under penalty of perjury and a one-year sentence of using OS/2 that it's an accurate description of what I've seen. ;)
Preposterous generalization!!
"You seem to find it distasteful and anti-intellectual when people choose the GPL license, because you ascribe the choice to religion."
Quote: There is nothing religious about the license choice. Even if people were to simply pick the license they've heard most about, that would be lazy and conformist, perhaps; nothing at all to do with religion. And you know that; yet you use the term as a denigrating label. That is, as far as I understand, the very definition of intellectual dishonesty. Low manipulation using terms loaded with emotional connotations.
Alright, if you dislike the word "religious": it was irrational. Doubly irrational because the choice of an open source license is a bit irrelevant if you are only intending to use something. If you want to contribute to the project it can be an entirely different matter, of course.
Sorry, didn't mean to imply that you said it.
Quote: The BSD license is an excellent choice for example for low-level interface libraries, language standard libraries, device drivers, and so on, where the initial adoption is more important than the risk of having incompatible closed-source derivatives.
So we're in the same boat here :)
Being open source at all is not always an option, either. Sometimes selling licenses is the only way to fund the development and support. It is a perfectly valid business model, nothing wrong in it.
Quote: If the Linux ecosystem used BSD instead of GPL, it would not be where it is right now. Claiming otherwise is ignoring FreeBSD, OpenBSD, and the other BSD variants, and their project history; wishful thinking not based on real world experiences. That is why I find this kind of discussion so hilarious and annoying at the same time.
I am not sure the license is so relevant. If I am not wrong, Linux was released at an especially critical time: when BSD was hampered by legal disputes and, at the same time, computers powerful enough to run a Unix-like operating system comfortably were becoming mainstream.
Check out LLVM licensing and who built it and tell me we wouldn’t have had open source stuff...Keep putting words in my mouth, why don't you?
Most companies positively want to give away stuff to the community other than their core business.No, they want to make as much money as they can (or risk shareholder lawsuits); contributing to open source projects just happens to be a net positive for several reasons.
Then look at redhat who just hired all the maintainers, thus circumventing the whole license thing.Well, it's not like the developers it did hire are from the capable end of the spectrum, looking at their contribution history.
Humans are mostly not shitty.You just haven't met enough of them. Almost all humans are stupid, shortsighted, selfish, and definitely shitty. Some individuals occasionally surprise by doing something better, often by accident. But, because we ourselves are just as shitty as everyone else, we keep the bar low, so we don't need to admit it out loud.
The GPL prevents the possibility that somebody takes some software, adds an interesting and nice feature to it, and starts charging money for it.
I don't think you quite understand the GPL. It in no way prevents anyone doing as you suggest. What it does do is *require* that the source code (or an offer for the source code) is offered to the person receiving the binaries.
I understand the GPL quite well, thank you very much.
Must be, but I swear under penalty of perjury and a one year sentence of using OS/2 that it's an accurate description of what I've seen. ;)There are worse fates...
It was you who said "it became something even religious". Unless you meant your own statement was preposterous generalization, I don't see what you are objecting to."You seem to find it distasteful and anti-intellectual when people choose the GPL license, because you ascribe the choice to religion."Preposterous generalization!!
I am talking about a user deciding whether to use a piece of software depending on its open source license and regardless of its merits. These people I am talking about (and in the late 90's it was quite frequent, or I am a magnet for weirdos!) would choose Internet Information Server over NGINX if the former had a GPL license and the latter BSD or any other open source license.At that time -- up to the mid-nineties -- everything BSD was overshadowed by the USL vs BSDi (https://en.wikipedia.org/wiki/USL_v._BSDi) lawsuit. In the late nineties, the exact status of the BSD sources was still debated, because the lawsuit was settled out of court, and the implications were unclear to those outside Novell and Berkeley University.
A user can choose a software package based on different criteria, one of them the license.Funny thing is, to a non-developer end user, all the free software/open source licenses provide the same freedoms -- which aside from distribution (of the original software or its derivative), boils down to "do whatever the heck you want with it".
Doubly irrational because the choice of an open source license is a bit irrelevant if you are only intending to use something.That I can fully agree with.
I am not sure the license is so relevant. If I am not wrong, Linux was released at an especially critical time: when BSD was hampered by legal disputes and, at the same time, computers powerful enough to run a Unix-like operating system comfortably were becoming mainstream.I think the license was very relevant for exactly that same reason. The GNU Project had very clear goals, and at that time, one could easily view them as "protecting" against Embrace-Extend-Extinguish (https://en.wikipedia.org/wiki/Embrace,_extend,_and_extinguish) strategies (of creating incompatible closed-source derivatives, in order to exclude competitive solutions). Although Linus Torvalds didn't think much about the license when he chose one for the Linux kernel, using the GNU tools to compile and build it made the choice easy, I understand.
So many times an update happens to x, y or z that breaks a program because it needs flavor 123 of package x and cannot use a different one.So I was using KDE, version 3 I think. I wanted to install a program to sync my Palm Pilot. It started installing, and it seemed like it was taking forever! When I went to its window, I saw that it was installing the PREVIOUS version of KDE over my current one! As a dependency. ARGH!
oh goodie.. another kde/gnome , vi/emacs war ...
IF all the effort spent on the plethora of these loonix flavours , color schemes, convoluted text editors and gui managers was spent on writing a single, unfragmented operating system that fixes all the microsoft flaws, and could run dos/windows binaries the world would have been far better off. We'd have a unified OS that could run anything from commercial to free software , unmodified, with perfect security.
Now we have 354 different flavours of babbling baboon , pustulent penguin or Flatulent flipper (or whatever stupid name they give the next rev) to deal with. We have 20 different installers and other crap.
Why the hell do we even need installers ? software installation should be as simple as : create a folder , drop in the application package (which should be 1 physical file) and done.
The 'package' should be a virtual file system in itself containing everything the application needs. no shared files. settings , icons and executable are all contained in the package file. a descriptor file tells you what is the icon , what is the startup file.
i can move this single file wherever i want it to reside. if i switch computer hardware : simply move the single file to the new machine and everything travels with it. Settings and all. How easy would that be ?
Programs would run in virtual containers partitioned from each other. They only see their 'package' where they can read and write, and a 'data' drive. users can grant permissions to packages to connect to data shares.
and none of that : this program requires x,y,z to be installed. if you need x,y,z it must come in the 'package'. So many times an update happens to x, y or z that breaks a program because it needs flavor 123 of package x and cannot use a different one.
I like the idea of 'portable' applications. Save em to a memory stick and done. Everything resides in 1 place.
disc space is cheap.
Yes. Kernel modules are really easy to compile and install (and write!). Literally a 4 line cut and paste makefile.
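For reference, the four-liner is presumably something like the canonical kbuild makefile, here for a hypothetical module source file hello.c (note that make recipe lines must be indented with tabs):

obj-m += hello.o
all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Running make then builds hello.ko against the running kernel's headers, and insmod hello.ko loads it.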
I repeat what I've said earlier probably incoherently that the Linux kernel is a nice bit of tech. Unfortunately it's a figurehead on a ship of fools on an ocean of diarrhea. It's the userland that sucks.
I like the idea of 'portable' applications. Save em to a memory stick and done. Everything resides in 1 place.
There's a wikipedia page (https://en.wikipedia.org/wiki/Portable_application_creators) for the concept, it's not a very popular concept though ... at least not as far as rolling the user data in with the application. Containerization of just the applications is becoming standard though, with Microsoft UWP and Ubuntu Snaps.
Sounds like you want MacOS X! Install? Drag to Applications!
(Which is basically Mach + FreeBSD + Nextstep) :-DD
Well, you can move apps onto a disk on a Mac, fundamentally. Some applications don’t like this at all, but most are just fine with it. However, they do normally look for preferences, etc in the user’s home directory, not on the external disk.but you can't move them OFF ...
There’s literally no difference in this regard between Mac OS X and classic Mac OS: both have a Preferences folder, and in both, an application on an external disk still looks for the preferences file in the Preferences folder. (It’s been a long, LONG time since Mac applications kept their preference files in the same folder as the application file!!!) Same with dynamic libraries and other assets in the Application Support folders. (Many assets will also work if simply located in the same folder as the application file. This is a way of e.g. providing a library without having to put it in the System folder.)
Some applications are smart, and will launch with the preferences file in an arbitrary location by dragging the preferences file onto the application icon.
but you can't move them OFF ...That is largely a non-problem with Mac OS X as long as application developers respect some pretty basic guidelines. If preferences, caches, program data, etc are in predictable locations it's quite easy to migrate everything to a new computer. I have done it several times almost without hiccups (except having to re-enter license data).
My idea is to be able to easily move software and settings to a new platform. if i upgrade my computer : move the files and done.
Better yet : install these files on a networked drive. that way my hardware is irrelevant. No matter from where i work : the software is accessible.
Many assets will also work if simply located in the same folder as the application file. This is a way of e.g. providing a library without having to put it in the System folder.There is nothing stopping one from writing and compiling one's applications to behave that way even now. All you need is a small launcher (wrapper script) that tells the dynamic linker about it.
but you can't move them OFF ...Huh? Where'd you get that idea? I don't say this to be rude or condescending in any way, but I think you're basing all your comments on very incomplete knowledge of what's possible!
My idea is to be able to easily move software and settings to a new platform. if i upgrade my computer : move the files and done.
Better yet : install these files on a networked drive. that way my hardware is irrelevant. No matter from where i work : the software is accessible.This is also doable, if the application is well-behaved. Most apps simply do not care where they are located. It's not uncommon in companies, for example, to have apps on a file server, even without network user accounts (the app simply uses the user's local preferences folder). And of course you can do full managed networks with roaming profiles and everything.
I was talking about now! :P A launcher script is only needed to force an app that you don't compile yourself to use different libraries than the default ones. (For example, to use the old AirPort Utility on newer versions of Mac OS X.)Many assets will also work if simply located in the same folder as the application file. This is a way of e.g. providing a library without having to put it in the System folder.There is nothing stopping one from writing and compiling one's applications to behave that way even now. All you need is a small launcher (wrapper script) that tells the dynamic linker about it.
I blame the users. They are completely satisfied working with crappy tools that crash or occasionally corrupt their data.Well, users only tolerate it because they've been conditioned (let's face it, primarily by the Windows world) to expect computers to be unreliable, sucky things. (As a long-time Mac user, I am still shocked by the stuff that Windows users will put up with, like the expectation of needing to wipe their system every once in a while, with ensuing data loss.) And for this, the blame falls on shitty developers.
Whenever I've built services or applications that I could trust, I've had to listen to an endless stream of "It doesn't need to be perfect; it just needs to look good" from cow-orkers, managers, and clients alike. Silly bugger...
In late nineties, I maintained a couple of classrooms full of MacOS 7.5 machines, with Adobe Photoshop, Illustrator, and PageMaker, Macromedia Freehand, Microsoft Word, and so on. As soon as I got the department to switch to bulk licensing, I switched the maintenance from individual machines to cloning, with the master on a CD-R. Didn't even need any cloning software, because of the folder-based approach MacOS used: just boot from the CD-R, clear the hard drive, copy the files and folders to the hard drive, and reboot (pressing a key to rebuild the desktop database).Yup. I've supported classrooms and companies, and indeed, I did the same thing on Mac OS 9 and earlier. (On Mac OS X, I instead made disk images stored on a bootable FireWire or USB drive, or a bootable external drive and disk images on a server. Then just used Disk Utility to restore the image, which is even faster than file copying.)
Saved a crapton of time, and reduced downtimes to just minutes (in case of a machine getting b0rked during a lesson).
Pity the department head (who refused to use a computer themselves, having a personal secretary print and type their emails for them) didn't trust me enough to give me a budget for consumables: every laser printer ink cassette, keyboard, and mouse that could not be refurbished, I had to obtain permission to buy a replacement, separately, in writing. My interactions with administrative types only went downhill from there, and is the reason why I burned out before I turned thirty.That sucks, and I totally understand how that burns you out. People need to feel valued and trusted.
Me too; I meant that an application on any OS can be designed to do that. On some OSes, the launcher script is needed; for example, on Linux, to set certain environment variables (LD_LIBRARY_PATH, specifically).I was talking about now! :P A launcher script is only needed to force an app that you don't compile yourself to use different libraries than the default ones. (For example, to use the old AirPort Utility on newer versions of Mac OS X.)Many assets will also work if simply located in the same folder as the application file. This is a way of e.g. providing a library without having to put it in the System folder.There is nothing stopping one from writing and compiling one's applications to behave that way even now. All you need is a small launcher (wrapper script) that tells the dynamic linker about it.
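To make that concrete, a minimal sketch of such a launcher on Linux (the myapp name and the bundled bin/ and lib/ layout are made up for illustration):

#!/bin/sh
# Hypothetical launcher: prefer the libraries bundled next to the
# application over the system-wide ones, then run the real binary.
HERE="$(cd "$(dirname "$0")" && pwd)"
LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export LD_LIBRARY_PATH
exec "$HERE/bin/myapp" "$@"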
People need to feel valued and trusted.It is more complicated than that. Supervisors/bosses expressing mistrust is damaging. Denying any opportunity to show how the changes done affect users, and why the systems work so well that users forget they exist, is just petty. I would say that being trusted and valued is definitely a good thing, but their absence is quite tolerable too; it is the unfounded expressions otherwise, without any recourse for correcting the misconceptions, that damage one's psyche.
Nope user data resides elsewhere.How do you produce new applications in such an environment ("No peeking in other files")?
here is my concept. A harddisk has 3 folders
- OS
- USER
- APP
When you install an application a single file is saved to the APP folder. Everything an APP needs is contained in that file. ( think of it as a ZIP file : it contains an internal file system with all the subfiles it requires. )
In USER there is also an APP folder. That contains the <application>.SETTINGS file. The APP can only write to its own SETTINGS file (the OS governs that; no stepping out of bounds, APPS can only write to their own .SETTINGS file). The APP package contains a DEFAULT.SETTINGS; when a user launches the application for the first time that one is saved to the user's space (again the OS does that, not under control of applications)
so
- USERS\ME\APP\excel.settings
- USERS\MYWIFE\APP\excel.settings
- APP\EXCEL.APP <- this contains everything required to run excel , including a default.settings file.
The OS and APP folders are READ only for applications. Applications can only read their own .APP file. No peeking in other files or in the OS folder.
Applications must register a file extension during install. They can only WRITE to their registered file extensions. they can read any other data file in \users , so they can always import data from other applications, but they can only muck up their OWN data files.
here is my concept. A harddisk has 3 foldersWhat you've described is a modern sandboxed app environment. That's nearly exactly how modern apps work on, for example, iOS. Sandboxed apps on macOS work much the same way.
If i need to move to different hardware : i take the APP file and fling it on the other machine. when i first launch it the OS will create a new .SETTINGS file , if i copied over the .SETTINGS file it will use that one.
The OS contains functions to safely move .APP and .SETTINGS file on and off a machine.
How neat would that be. No more viruses , no more runaway programs that overwrite their own , or other programs' files. No more data snooping; they simply can't: programs only see their own files contained in their .APP file, and that file is read only. They can only write to their own .SETTINGS files and to approved file extensions.
How do you produce new applications in such an environment ("No peeking in other files")?
To a first approximation, you don't. You can't for iOS on an iPhone. You need the "messier" environment of a, errm, uh, real computer. For iOS, you'll need to run Xcode, which means you'll need a Mac. (I understand there are ways to get around this to use Windows, but I'd be surprised if you could do it from, say, an iPad).
I believe this is generally possible for Android using something like AIDE, but I do not know how practical it is. I can't see writing a lot of Java without a proper keyboard, but of course you can always plug one in.
You guys are clearly talking about graphical user interfaces only. When I do real work, I usually use command-line commands to pre/postprocess huge data files, and often chain commands. What you are talking about does not suit my workflow at all.Pointing at me, you? You dare? Eh? Eh? ;)
However, when I browse the web, or download and open a document, the suggested restrictions would be very useful. They are not even that difficult to implement -- except for the sheer number of libraries those applications rely on, and their need to be able to read from and write to a large number of files (though mostly in personal preferences) and sockets.Of course there are files that must be accessible for any application like system libraries, etc. On the other hand, on Plan 9 you could run an application without any network access if needed. Just prune its namespace and voila! No network.
In fact, Linux already provides the necessary OS/kernel level support for this, seccomp() (http://man7.org/linux/man-pages/man2/seccomp.2.html). Simply put, a library can install filters that restrict which syscalls the kernel will allow to run. It is also possible for the widget which the human user uses to choose a file, to be a completely separate process, which provides the target file as a descriptor/file handle to the application. My point is, this model is already possible in Linux and OpenBSD at least. It is not an OS problem; they can already do this.It is an OS problem because doing that is incredibly complicated. That's why OS architecture matters, not just facilities. That statement is comparable to saying "why do you program in C++ when you can do anything in assembler?" ;)
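For the curious, here is a minimal sketch of the simplest seccomp variant, strict mode via prctl(); the syscall-filter mode mentioned above takes a BPF program and is considerably more involved:

#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <linux/seccomp.h>

int main(void)
{
    /* After this call, only read(), write(), _exit() and sigreturn()
       are permitted; any other syscall kills the process with SIGKILL. */
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) == -1) {
        perror("prctl");   /* only reached if seccomp was NOT enabled */
        return 1;
    }
    write(STDOUT_FILENO, "still alive\n", 12);
    /* An open("/etc/passwd", O_RDONLY) here would be fatal. */
    _exit(0);
}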
The problem is, nobody is willing to put up the resources and requirements for application and library developers to do this. The pool of developers paranoid enough (to not trust even their own code to behave, necessary for this kind of development) is also pretty small, most developers believing that error checking and security is something that can be added on top later on if there is additional money for it.So again it's the task of the OS architects to provide a proper security architecture, something a bit beyond what was good in 1970.
So, that discussion is really unfair/off-topic here, considering the thread title, and that GNU/Linux is one of the OSes that would allow that approach right now, with the necessary kernel-level support to make it robust.Indeed it is, I am not criticizing Linux for trying to be another Unix. However, the security issues somewhat came up.
Funnily enough, something similar to this already exists for Linux, developed by the NSA: SELinux (https://en.wikipedia.org/wiki/SELinux). It is obviously more oriented towards services than applications, but it does assign "security labels" for each service (application) and each file and directory, and tightly controls access across "security labels". Thus far, nobody has bothered to construct a working policy for a desktop environment, that's all.However it's not so easy. I have used the MAC subsystem on FreeBSD (more or less the same thing) and I had some funny and unintended consequences, such as processes that despite escalating to root would still be harmless.
I think the invention of C/Unix was far more harmful to IT than beneficial. Many people celebrate both, but should we really?
simple. you have a development tool.Nope user data resides elsewhere.How do you produce new applications in such an environment ("No peeking in other files")?
It is an OS problem because doing that is incredibly complicated.I disagree. You don't need to replace the entire OS, because all that needs changing is the desktop environment.
I think the invention of C/Unix was far more harmful to IT than beneficial.You think that way, because you have no idea where C and Unix are really used. The amount of research alone done on Unix or Unix-like systems is staggering. Since 2017, all 500 of the world's most powerful supercomputers -- those used to do the best weather forecasts, for example -- run Linux. Before that, they ran some variety of Unix. Any others, including Windows, have been very short-term blips on the list.
I like to think the world would be different if they delivered ARX.Me too.
What if Microsoft had never achieved a near-monopoly on the desktop computer arena, and we had several competing OSes there?
Since 2017, all 500 of the world's most powerful supercomputers -- those used to do the best weather forecasts, for example -- run Linux.It's open source and thus an easy housekeeping OS to fit all the bespoke code into, also cheap to find developers for. That would have been likely true for some other open source OS.
Almost all programming languages today use the standard C library.Which made for a ton of fun when there was a buffer overflow in it.
Typically, their own standard libraries are written in C, or sometimes C++.Yes it metastasized to the point that basically everything is broken.
Do you understand that if there was no C, we'd have Fortran, Pascal, some varieties of BASIC, Forth, some Lisp varieties?And Smalltalk and whatever else would have evolved.
Probably get some groans here but I rather liked pascal. It was mega explicit.
And that's why the security situation is catastrophic right now.It is an OS problem because doing that is incredibly complicated.I disagree. You don't need to replace the entire OS, because all that needs changing is the desktop environment.
The OS architecture does not matter, because the low-level libraries (the standard C library, or the standard C++ library, various libraries the applications need, and the DE widget toolkit) are the ones that communicate with the OS kernel and the desktop environment.
Probably get some groans here but I rather liked pascal. It was mega explicit.Same here, I still think it's a great first programming language. Jumping right into object oriented programming without previous experience doesn't make sense.
I don't agree, except of course the Microsoft monopoly did its "best" to push their crap down everyone's throats. They even worked aggressively to break Internet standards, creating their virtual walled garden.What if Microsoft had never achieved a near-monopoly on the desktop computer arena, and we had several competing OSes there?
I think that would be a nightmare. We already have PC vs Mac vs Linux. Anyone* who buys one box expects any software to run on it.
* I was going to say "Anyone (non-technical)" but it's the linux users that seem to be the most vocal, demanding linux portsAt the same time Linux developers commit the same sins. Porting software developed on Linux by programmers with Linux mindsets to other Unix OSs is a nightmare.
Probably get some groans here but I rather liked pascal. It was mega explicit.loved me pascal. First year of programming in high school was all about algorithms at the high level (the thinking) and turbo pascal :)
Give me C any day, but I liked how pascal taught me to be rigorous
You forget: before Linux, they were all Unix variants.Since 2017, all 500 of the world's most powerful supercomputers -- those used to do the best weather forecasts, for example -- run Linux.It's open source and thus an easy housekeeping OS to fit all the bespoke code into, also cheap to find developers for. That would have been likely true for some other open source OS.
Is it? The security situation on computers I use is pretty good, actually.And that's why the security situation is catastrophic right now.It is an OS problem because doing that is incredibly complicated.I disagree. You don't need to replace the entire OS, because all that needs changing is the desktop environment.
The OS architecture does not matter, because the low-level libraries (the standard C library, or the standard C++ library, various libraries the applications need, and the DE widget toolkit) are the ones that communicate with the OS kernel and the desktop environment.
At the same time Linux developers commit the same sins.Completely agreed. I just think it is because humans are crappy, and not because Linux is harmful.
Someone wrote an e-commerce site in Perl back when it was trendy.I can't look at Perl code anymore; I have PTSD from trying to maintain and fix a certain project my users used just before the turn of the century. I still get flashbacks and all, thinking about it. (I was young, and didn't know when to drop a copy-paste spaghetti turd in the bin, rather than trying to fix it.)
Lock me in a room with TP7 please with no internet and never see another software job again.Turbo Pascal 5 was the first proper compiler I ever had. It was very nice. (Although my old binaries did get bitten by the timing bug: running on fast machines would always lead to a division-by-zero crash, because a library timing loop ran too fast.) Nowadays, I get most done using gcc or gfortran.
It is not rational to declare oranges harmful just because they aren't up to your imaginary superfruit that you think might have evolved instead of oranges. It is pure speculation, and classifying things harmful because of speculation alone, is nonsense.Of course not. But Linux has gone backwards related to Unix in several ways. And it's become a showcase of poor practices. That makes it harmful. Of course it's a Unix clone, I won't claim that it's bad because it's still based on the Unix security model.
Is it? The security situation on computers I use is pretty good, actually.Awww, sorry. You are right, buffer overflows were a thing of November 1988.
If you look at e.g. appliances that have had security issues, it is almost always because the manufacturer configured a service with a fixed "secret" username and password. Nothing to do with C, just horribly bad software engineering in general.
Still Linuxisms have harmed the portability of most software targeted for Unix systems. Now it's targeted for Linux. That's harm in my book.At the same time Linux developers commit the same sins.Completely agreed. I just think it is because humans are crappy, and not because Linux is harmful.
You forget: before Linux, they were all Unix variants.Which was the closest thing to a useful open source OS before Linux and the BSDs.
If you look at e.g. appliances that have had security issues, it is almost always because the manufacturer configured a service with a fixed "secret" username and password. Nothing to do with C, just horribly bad software engineering in general.Mostly a mix of default logins and web front ends not escaping their inputs. Low hanging fruit first; also, a remote exploit's commercial value is destroyed by botnets. You shouldn't expect the more sophisticated exploits which can target Linux servers to show up in IoT botnets often; botnets don't generate enough income to waste such an exploit on. The Mikrotik-targeting botnet used a buffer overflow though.
But Linux has gone backwards related to Unix in several ways.I partially agree, and partially disagree. The kernel has definitely gone forward. A lot of services considered part of the OS have gone backwards.
And it's become a showcase of poor practices. That makes it harmful.I don't think the existence of distributions using horrible practices is enough to label the entire OS harmful.
Awww, sorry. You are right, buffer overflows were a thing of November 1988.Stop trolling. You know that's not what I am saying.
Still Linuxisms have harmed the portability of most software targeted for Unix systems. Now it's targeted for Linux. That's harm in my book.Just because crappy people do crappy stuff with some tools, does not make the tools themselves crappy in my book.
Probably get some groans here but I rather liked pascal. It was mega explicit.I LOVED Pascal, and found that there was a LOT less debugging required than some other languages. Once I got the program to compile without errors, it often worked, or was VERY close to working correctly.
My point being, you cannot force crappy people to do good work by having the compiler/language/OS do it for them. They'll just invent new, possibly worse ways, to be crappy. As long as users are happy with crappy tools, crappy tools get produced. I don't think we can fix *that*. What we can do is learn, and create something better, now, for *ourselves*.
All the cool modern IT services that we saw prototypes of in the late 90's and were expecting to get commonplace, they just don't work anymore. They lag, they crash, they give server error messages, then I need to get a phone and call the helpdesk to resolve the issue.Exactly.
I'm sitting here on 10.14.1. Totally happy.
It's all still labelled. You just have to right click the toolbar and turn it on now:
(https://i.imgur.com/Dk0VnmE.png)
Also dark theme now which is much easier on the eyes.
Edit: UI has hardly changed! 10.4:
(https://i.imgur.com/U9HSqqZ.jpg)
I would hire a nerdy C/linux hacker producing ugly buggy spaghetti code with myriads of buffer overflows, over the more modern status quo in the "professional" software market, any day, no doubt.
Avoiding buffer overflows in C is trivial. Just read and understand the language standard.No, the real trick is never making mistakes.
Yes, agreed. I was recently bitten by a silly mistake. I have a pointer to a counter that I want to increment...Avoiding buffer overflows in C is trivial. Just read and understand the language standard.No, the real trick is never making mistakes.
*ptr++Yeah, that mistake bit me... So, the OS goes?But Linux has gone backwards related to Unix in several ways.I partially agree, and partially disagree. The kernel has definitely gone forward. A lot of services considered part of the OS have gone backwards.
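For anyone who hasn't been bitten yet: postfix ++ binds tighter than unary *, so *ptr++ advances the pointer, not the counter. A tiny illustration:

#include <stdio.h>

int main(void)
{
    int counter = 0;
    int *ptr = &counter;

    *ptr++;      /* parses as *(ptr++): the pointer moves on, the value
                    read is discarded, and counter stays 0 */
    ptr = &counter;
    (*ptr)++;    /* this is the intended increment of the counter */

    printf("counter = %d\n", counter);   /* prints: counter = 1 */
    return 0;
}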
I don't think it's only a matter of some distributions. There are ways in which the Linux crowd, as a whole, hasn't got the good principles right. From those problems (example: Linux is a kernel, not an OS) you can derive other poor practices.And it's become a showcase of poor practices. That makes it harmful.I don't think the existense of distributions using horrible practices is enough to label the entire OS harmful.
I know, just pulling your leg ;) The current wave of "IoT" crap is not Linux fault at all, but crappy developers who just want the software running and won't care about consequences. Same thing happens with high end instruments running Windows.Awww, sorry. You are right, buffer overflows were a thing of November 1988.Stop trolling. You know that's not what I am saying.
Yes, C does not have native buffer overflow detection or bounds checking. This is a big problem, because C developers do not bother to do bounds checking themselves. We should have fixed this ages ago.I don't think C and Unix have been bad. Quite the contrary. But C is a systems programming language pretty similar to programming in assembler. The problem is, couple that with a heavily networked world in which your programs won't be communicating only with trusted sources and you have the recipe for disaster.
Yet, even fixing it would not affect the security of Linux-based appliances (routers, media players, TVs, and so on) much, because the manufacturers tend to misconfigure their services, especially leaving "hidden" logins with fixed usernames and passwords.Not only that, but extremely poor practices. But they would have botched it with any other OS. You are right.
Again, philosophical differences are key here.Still Linuxisms have harmed the portability of most software targeted for Unix systems. Now it's targeted for Linux. That's harm in my book.Just because crappy people do crappy stuff with some tools, does not make the tools themselves crappy in my book.
Wasn't Booch the asshat responsible for UML and RUP?Oh. I remember when software engineering was about designing better software, modeling it better and understanding it.
I have no problem with object oriented programming. Most of the time programming against structures is OO masked poorly. Sometimes it's the only way to scale a large problem domain up effectively, but you do have to know your shit to put a large bit of software together effectively and without ending up with 80Mb clock applications.That's why I cringe when I see that OO is introduced too early into the curriculum.
Quick scan through that while eating breakfast lines up with my understanding of the universe. I use a similar method myself. Most of what I build these days is task and message oriented, connected with queues, for a different reason. It does help with architecture and decomposition of the problem domain, but one of the cool things is the designs usually automatically lend themselves to horizontal scalability.Look at the date :)
I’ll have a read in depth later on the train but looks spot on to me.
I don't think it's only a matter of some distributions. There are ways in which the Linux crowd, as a whole, hasn't got the good principles right. From those problems (example: Linux is a kernel, not an OS) you can derive other poor practices.It is not a single cohesive crowd, either. The conflicts between the kernel developers and the GNU C compiler (doing something completely un-useful just because the standard leaves it up to implementations) and the GNU C library (not exposing all Linux kernel interfaces, especially thread IDs, which could be very useful for Linux-specific application programmers) reoccur regularly.
Again, philosophical differences are key here.Yeah, or semantics.. as in what exactly is meant by "considered harmful" in the thread title. I see the term strong enough to require one to demonstrate actual harm that would be avoided by not using it at all, rather than base it on comparison to imaginary things or speculation.
I really think that nowadays everything is going down the drain with too much focus on mastering huge libraries rather than fundamental principles.I fully agree.
My extremely worn copy of "Standard C" by Plauger and Brodie has a note by the summary of gets(3c) , AVOID. Unless I have read it that day, I *never* call a library function without reading the summary of the usage if I am doing serious work.I do the same with the Linux man pages online (http://man7.org/linux/man-pages/). Don't let the name fool you; each page has a Conforming To section specifying which interfaces are C89/C99/C11, POSIX.1, BSD, or something else. The Description and Bugs sections are particularly useful.
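gets() is the poster child: it cannot be told the buffer size, which is why it was removed from the language entirely in C11. The habit those pages teach is to reach for fgets() instead; a minimal sketch:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[128];

    /* fgets() writes at most sizeof line bytes, unlike gets(),
       which would happily run past the end of the buffer. */
    if (fgets(line, sizeof line, stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';   /* strip trailing newline */
        printf("read: %s\n", line);
    }
    return 0;
}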
The thread started after reading all manner of stuff which was specific to some particular version of some particular distro neither of which was stated.Yup; sometimes you just need to poke a bit to get an interesting discussion going.
The Gnu tools were much better quality 30 years ago than they are now.I don't really know, considering how different the typical use cases and hardware are today. Some of them, like make, sed, awk, and m4, are only better; I don't recall encountering a bug in any of them. GCC and glibc are really hard to compare, because they've changed so much. (Should we compare the number of bugs, or their density? Number of issues? Users affected? Likelihood for a single developer to run into an issue? I do not know, and don't want to rely on my memory or "gut feeling": I'm too often wrong when I do that.) Just think about the amounts of data current computers shuffle around, on a near-constant basis. It is staggering compared to 30 years ago.
I feel fairly certain that very few of the current Gnu developers have read the seminal Unix papers from the 70's.I'm certain they have not, because many devs now openly decry the Unix philosophy (https://en.wikipedia.org/wiki/Unix_philosophy) and KISS principle (https://en.wikipedia.org/wiki/KISS_principle)/software minimalism (https://en.wikipedia.org/wiki/Minimalism_(computing)).
#!/bin/bash
i=0
while true; do
    echo "Hello No: $i"
    i=$(( i + 1 ))
    sleep 1
done
i=0; while true; do echo "Hello No: $i"; i=$(( i + 1)) ; sleep 1; done
If the effort that's gone into Linux had gone into Minix instead, we'd have a really good system.If people put the money they use for purely visual cosmetics (makeup) into space exploration, we'd have had a manned station on Mars for at least a decade already. (I won't say makeup is harmful, though. :D)
But it's not just computing. It is society.True. Each passing year, Idiocracy comes a step closer to reality. Or maybe it is just that as we age, we shed our misconceptions and naive beliefs we had when we were younger, and see the world more clearly.
Yet.. Between 2002 and 2012, the share of people living under the poverty line fell from 26% to 13% globally (https://en.wikipedia.org/wiki/Poverty_reduction#Sustainable_Development_Goals_(SDGs)). Global poverty was halved in a single decade, and it is quite possible we can eliminate it in the next couple of decades.
Virtualization is just another fad that hides the problem of poor architecture and isolation. Watch out for when you think it is solved, things get worse. First we had enterprise crap that was shipped as VM images. That wasn't terrible. Then we realised we could archive old desktop apps. Which was a good thing.
But now it has gone to hell and we have some unholy pile of shite that turns up on your doorstep as a whole bunch of docker and kubernetes/swarm poo.
I work with a lot of young developers and startups. Idiocracy is here already. Big time. I’m usually the guy who has to dig them out of the shit.You get to do that? You need to tell me how you got there. Me, I get told that the shit is just warm and fiiine, come on in and stop complaining.
On top of that a lot of software companies are MBA driven and the engineering side is seen as a cost centre rather than the core of the business.Here in Finland, about one third of large IT projects actually complete; the rest just fail, producing nothing usable. This is seen as perfectly normal and understandable; no CYA needed.
Then I realised that wasn’t their goal. So now I concentrate on extracting as much money from them as possible because the less they have the less they can hurt people.I realized that when I was 25, having run a small IT company with my eldest brother for a couple of years. I was too naive, and thought if I was even better, more effective, and more honest, my clients would appreciate it, and things might change. I was utterly wrong, of course, and broke myself mentally from doing so. The costliest mistake I ever made for sure.
As for poverty, I think they redefined the line. I take my kids to school with people on food bank handouts.Could be.. but people no longer starve because of natural resource shortages; they only starve if they live in a country or region controlled by dictators (warlords, socialists, or communists). Children do now have the opportunities people did not have in the past, globally.
Now I have no problem with immigration for sure but the reason it works is that the immigrants and outsourcers get paid less. They shouldn’t.Do not get me started on ethnic replacement projects, that rant is not nice. It is definitely going on here, at least. I personally no longer have access to professionals to help me untwist my mind so I can do real work again, because those services seem to be focused to "paperless" now. Because of my past, I cannot bring myself to borrow enough money to do it on my own; that kind of risk would be too much for me to mentally handle: self-defeating from the get go. Anyway, I don't want a monoculture, and I don't want to see any single culture vanish, including my own. I do want to change many facets of many cultures, though, because I can see the damage they cause to everyone. Because I do not give a shit about protecting groups and insist on dealing with individuals on an individual basis, judging them solely based on their behaviour, I constantly risk being labeled a racist, and being excluded from participation and employment.
I know I overshare and am overly direct and lack the charisma needed to convince nontechnical people, though. You certainly have much better people skills than I do.
A couple of decades ago, I remember talking to a professor about patents in a poster session. (I had almost applied for a basically software one the year before, related to online editing of web pages, and the underlying mechanisms for doing so [that are not used even today, funnily enough].) I had observed to the professor that the majority of patents are used to block competing products from becoming available, rather than protecting available products from being copied. (Not just in physics, but also in engineering. The crankshaft patent delayed automobiles for twenty years, for example. Without certain basically unused additive manufacturing patents, we could have had 3D printers in the late eighties.) Because of that, I dislike the current patent system, and would like to see it replaced with something that protects products incorporating the patents instead: a scheme of "use it or lose it", if you will. A student indignantly called me a communist, and said she had "the right to profit from her ideas". I think I sprained my brain somehow observing the situation: being a CEO of my own small company, being labeled a communist by an ostensibly sharp and intelligent student, wearing a Che Guevara shirt (figuratively speaking; no tuition fees in universities in Finland), claiming that society owes her money and resources, because she has "ideas".
A few years ago, I was still hoping I could get employed in my University, developing a new molecular dynamics simulator (that has a core design that can actually utilize the hardware we have now, with a number of different potential models and simulation regimes, both parallelized and distributed, separating the bits scientists want to fiddle with (the potential models) into units that let them do so without compromising the efficiency of the simulation, easily). Materials science needs good, reliable, valid simulators. Unfortunately, even the HPC stuff seems to be under high pressure to be outsourced to the cloud. It seems it is cheaper to pay millions for service providers than pay a cheap undergrad/grad a salary to fix and develop the tools. So now, I really don't have a plan anymore.
I am not kidding when I say I would gladly take a one-way trip to Mars, if I got some equipment to do real exploration and analysis there for a year or two before perishing.
At least you guys have free university. We don't any more. My eldest is going to apply next year so I've got to foot the bill for that. She's wants to do molecular biochemistry which is going to cost a small fortune.Can't she learn German? Make use of those EU bennies while they last.
That has been considered actually. We all looked at moving into Europe but it’s too much to move 5 people.Or come to Spain ;)
It’s too hot and I don’t like Paella ;)Txipirones en su tinta (squid in their ink)? ;)
Edit: this turned into a long rant the moment idiocracy was mentioned. Sorry.
(SNIP)
Ugh it all sucks. 5 years of this shit left and I can retire (early) and write a book decrying the whole thing Dilbert-style.
My normal environment has been SunOS/Solaris/OpenIndiana for 26 years. I stick with it because it has a tradition of being very conservative about change. The traditional Unix mindset.
More and more universities in Europe are offering programs fully in English. (Especially at masters and PhD levels, but some at bachelor’s level, too.) It’d be worth looking into...At least you guys have free university. We don't any more. My eldest is going to apply next year so I've got to foot the bill for that. She's wants to do molecular biochemistry which is going to cost a small fortune.Can't she learn German? Make use of those EU bennies while they last.
Learning German to a STEM academic conversational/writing level isn't that hard starting from English ... it's below high school level after all.Umm... no. Absolutely not. You also need to be able to read academic writing, and that takes years to achieve with German, coming from English. German academic writing is very dense and difficult to parse. Basic German skills do not cut it.
That’s a pet hate of mine. Someone mailed me a stack trace the other day that was 42k of text :--
Just like German, where, by the time you’ve reached the end of a sentence with a subordinate clause and the verb finally deigns to reveal itself, your brain’s registers have overflowed and lost the subject and/or object of the sentence, so you have to start over! ;DThat’s a pet hate of mine. Someone mailed me a stack trace the other day that was 42k of text :--
And there is one row of "can't find a comma in file so and so"; the rest is just the jvm throwing its hands in the air and having a fit.
In this case it was a null ref in a method which had about 3000 lines of code in it :palm:Bring back processors with a 256 byte stack :P
Reply to developer: fuck off and add some assertions!
... Bring back processors with a 256 byte stack ...256 bytes !!! PPFFTTTT Luxury !! :-)
Bring back processors with a 256 byte stack :PThat's how much total RAM I had in the first processor I wrote software for.
If you feel that GNU/Linux became Windows, blame Apple.It's a widely known fact. Haven't you noticed the particular smell of brand new Apple gear?
Seriously, blame Apple. There is a recent trend among the Linux folks to implement as many components Apple style as possible. This all started when Apple released their iPhone and after the Android operating system dropped. That resulted in a significant shift of the Linux user base, from mostly professional users and developers in late 2006 to mostly plain Janes and Joes in late 2010. Consider this the permanent September moment for Linux.
Apple's implementations work properly though :-DDApple's implementation is NOT Linux though. It is Mach kernel + IOKit driver framework + a lot of FreeBSD kernel-level and non-graphics userland components + Apple's proprietary graphics stack. macOS and FreeBSD still exchange code often to this day.
(spent half the morning dealing with a fucked up systemd + mdraid box)
It's a widely known fact. Haven't you noticed the particular smell of brand new Apple gear?I think it is the money talk that led Linux down that path. For a while (especially before Android overtook iOS as the most popular mobile OS) the Linux companies felt the pressure from the shifting user base and the bean counters decided to shift the direction.
Obviously it's polluting their precious body fluids.
It is Mach kernel + IOKit driver framework + a lot of FreeBSD kernel-level and non-graphics userland components + Apple's proprietary graphics stack. macOS and FreeBSD still exchange code often to this day.Again, fortunately. And don't forget that FreeBSD has a really noble heritage! You mention it like it was some sort of evil alien.
Thank god/s/flying spaghetti monster, whatever!
I think it is the money talk that led Linux down that path. For a while (especially before Android overtook iOS as the most popular mobile OS) the Linux companies felt the pressure from the shifting user base and the bean counters decided to shift the direction.RedHat had its hand forced when Android took off, though. Due to the user-base shift, Linux had to go from a professional server/workstation operating system for huge systems to a consumer-facing operating system for small devices, and patches from Google started to pour in, all of them rapidly evolving consumer-facing code.
Linux company? There's only one: RedHat. Everyone else is like flies around shit.
Oh yeah, it's a mental health hazard to get Linux running right with no corners cut.Not any more than mangling your fingers in a milling machine. Again, the real problem is that people aren't willing to pay for something they think should be free. They are willing to pay for Red Hat/IBM sprinkles on top, because OMG! It's RedHat! It's IBM! I love them! They're famous! even when the sprinkles are made from barely recycled poo.
I just want a laptopFor what it is worth, both the HP EliteBook 830 G3 and the 840 G4 have had full driver support in current Debian derivatives (Ubuntu, Mint) since 2016 or earlier. I know because I have two loaners in use, both still running Ubuntu 16.04.4 LTS. This particular 840 G4 has a 512 GB SAMSUNG PM961 (MZVLW512HMJP; M.2 PCIe x4 3rd gen) SSD and 16 GB of RAM. Bluetooth, wireless (both the 2.4 GHz and 5 GHz bands), virtualization, audio, and video including acceleration (VA-API) all work out of the box. The full-HD (1080p) display is okay. I have not tried WWAN (there is a SIM slot on the side, but no WWAN cards are installed in these particular ones).
I don't know. I had problems with all sorts of stuff like Bluetooth drivers, etc. I just want a laptop to take somewhere, and instead I find myself on a forum for 2 hours trying to debug scripts and weird errors.
I always have different problems with different hardware.
I always tell them to go to an Apple store and buy an iPad. Got a problem? Book an appointment at the Genius Bar, not me. They can tell you that you’re an idiot! Oh, but it didn’t do what I wanted. You don’t know what you want, because you don’t understand that you have to compromise somewhere. Also, I don’t give a fuck, no you’re not coming over to see how I am just to get me to fix it, no I’m not even touching it because I’m out of nitrile gloves, and no you can’t have the WiFi password, you melt.Or buy a MacBook Pro or Mac Mini with the AppleCare extended warranty if they need a "full computer", and buy an external hard drive twice the capacity of the computer's internal drive as a backup drive. Yeah, it is a bit on the expensive side, but if you have a problem within three years you just call Apple directly. (Under the AppleCare extended warranty, Macs get three years of warranty repair and telephone support; without it, one year of warranty repair and 90 days of telephone support.) They build the hardware and the software, and they provide, to be honest, excellent customer service for their products.
I can't recommend a Mac right now, unfortunately. I recently bought a nice new 2018 MBA and the keyboard started getting dicky after 3 weeks, resulting in duplicate keypresses and total unusability. Straight back to Apple. Until they get over their love affair with the butterfly keyboard, I can't recommend them.MacBook butterfly keyboards are now under a recall program.
I also don't want them to have a Mac, because iOS is far more dumbass-proof.