Author Topic: Gnu/Linux Considered Harmful  (Read 46106 times)


Offline tooki

  • Super Contributor
  • ***
  • Posts: 11600
  • Country: ch
Re: Gnu/Linux Considered Harmful
« Reply #225 on: November 21, 2018, 04:21:41 pm »
Many assets will also work if simply located in the same folder as the application file. This is a way of e.g. providing a library without having to put it in the System folder.
There is nothing stopping one from writing and compiling one's applications to behave that way even now.  All you need is a small launcher (wrapper script) that tells the dynamic linker about it.
I was talking about now! :P A launcher script is only needed to force an app that you don't compile yourself to use different libraries than the default ones. (For example, to use the old AirPort Utility on newer versions of Mac OS X.)

I blame the users.  They are completely satisfied working with crappy tools that crash or occasionally corrupt their data.

Whenever I've built services or applications that I could trust, I've had to listen to an endless stream of "It doesn't need to be perfect; it just needs to look good" from cow-orkers, managers, and clients alike.  Silly bugger...
Well, users only tolerate it because they've been conditioned (let's face it, primarily by the Windows world) to expect computers to be unreliable, sucky things. (As a long-time Mac user, I am still shocked by the stuff that Windows users will put up with, like the expectation of needing to wipe their system every once in a while, with ensuing data loss.) And for this, the blame falls on shitty developers.

In the late nineties, I maintained a couple of classrooms full of MacOS 7.5 machines, with Adobe Photoshop, Illustrator, and PageMaker, Macromedia Freehand, Microsoft Word, and so on.  As soon as I got the department to switch to bulk licensing, I switched the maintenance from individual machines to cloning, with the master on a CD-R.  Didn't even need any cloning software, because of the folder-based approach MacOS used: just boot from the CD-R, clear the hard drive, copy the files and folders to the hard drive, and reboot (pressing a key to rebuild the desktop database).

Saved a crapton of time, and reduced downtimes to just minutes (in case of a machine getting b0rked during a lesson).
Yup. I've supported classrooms and companies, and indeed, I did the same thing on Mac OS 9 and earlier. (On Mac OS X, I instead made disk images stored on a bootable FireWire or USB drive, or a bootable external drive and disk images on a server. Then just used Disk Utility to restore the image, which is even faster than file copying.)

Pity the department head (who refused to use a computer themselves, having a personal secretary print and type their emails for them) didn't trust me enough to give me a budget for consumables: for every laser printer ink cassette, keyboard, and mouse that could not be refurbished, I had to obtain permission to buy a replacement, separately, in writing. My interactions with administrative types only went downhill from there, and that is the reason I burned out before I turned thirty.
That sucks, and I totally understand how that burns you out. People need to feel valued and trusted.
 

Offline edy

  • Super Contributor
  • ***
  • Posts: 2385
  • Country: ca
    • DevHackMod Channel
Re: Gnu/Linux Considered Harmful
« Reply #226 on: November 21, 2018, 07:38:01 pm »
I stumbled across this interesting video (sorry if it is already in the thread) regarding Microsoft vs. Linux and the "Embrace, Extend, Extinguish" strategy used by Microsoft to kill their competition. They mention VMWare, The Linux Foundation and GPL Enforcement, and I wonder if anyone else has seen it and can comment on the accuracy of what this speaker is saying:



He talks about Microsoft suing companies that use Linux, and about VMWare being a GPL violator that tries to keep pro-GPL people off the Linux board. Essentially, the claim is that Linux is being overrun politically and economically by major companies who buy into the Linux Foundation and use their influence, to the detriment of Linux, open source, innovation, and freedom, to squash it into oblivion.
« Last Edit: November 21, 2018, 07:48:09 pm by edy »
YouTube: www.devhackmod.com LBRY: https://lbry.tv/@winegaming:b Bandcamp Music Link
"Ye cannae change the laws of physics, captain" - Scotty
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23034
  • Country: gb
Re: Gnu/Linux Considered Harmful
« Reply #227 on: November 21, 2018, 07:44:01 pm »
I skipped through the slides and it’s 100% accurate. Depressing but accurate.

RH bought Linux basically. MSFT want some pie so they spread their tentacles (.net, powershell, SQL Server) onto it.

Edit: I have worked with .net since day one and SQL Server since 5.5, and was SQL certified for many years. This is a fucking hellhole of a platform pitch. Walk away fast if you ever see it coming.
« Last Edit: November 21, 2018, 07:49:29 pm by bd139 »
 

Online Bud

  • Super Contributor
  • ***
  • Posts: 6925
  • Country: ca
Re: Gnu/Linux Considered Harmful
« Reply #228 on: November 21, 2018, 08:02:47 pm »
The presenter guy is a moron, because only a moron would ask the audience whether their Linux desktop is hard to use, at a Linux conference in a room filled with Linux nuts, for the purpose of demonstrating to the world how easy Linux is to use.
Facebook-free life and Rigol-free shack.
 

Offline GregDunn

  • Frequent Contributor
  • **
  • Posts: 725
  • Country: us
Re: Gnu/Linux Considered Harmful
« Reply #229 on: November 21, 2018, 08:13:30 pm »
I had to use Windoze for my day to day work from about 1997 till I retired; I was familiar with both platforms since the very beginning, but there's a reason I have always chosen MacOS for personal use.  Watching people flail around trying to use MSFT software for trivial tasks, and having to do so myself, only strengthened my resolve.  Not intending to start an argument, just pointing out that I've used both a lot and it only got worse over time.  I totally understand why people wipe their PCs, because trying to remove some programs or migrate data is frustrating and painful.

But I honestly considered using Linux for a while, just to completely disassociate myself from locked-down payware.  I had a Linux box set up beside my Windoze machine at work for years, but no matter what I tried, I just couldn't get everything to play nicely.  As touched on earlier in this thread, the problem with open source in the real world is that people often don't care about compatibility and just write software which does what they need.  They are also less concerned with usability than they should be; I tried pretty much every GUI for Linux I could find, and they all had major flaws which constantly reminded me I was running a GUI on top of a headless server.  All the programs had different approaches to UI design, whether it was shortcuts or philosophical divergences, and let the X controls show through in odd or disturbing places.  We used it heavily in our server environment, where it was well suited to the business needs, but as a personal environment it was generally a disaster.  As with my distaste for Java, I figure using something for 20 years is a valid basis for deciding it's not a good solution for me.

But then, I'm the guy who worked for AT&T and uses Verizon cell service.  I have no corporate loyalty.   >:D
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6285
  • Country: fi
    • My home page and email address
Re: Gnu/Linux Considered Harmful
« Reply #230 on: November 21, 2018, 08:21:18 pm »
Many assets will also work if simply located in the same folder as the application file. This is a way of e.g. providing a library without having to put it in the System folder.
There is nothing stopping one from writing and compiling one's applications to behave that way even now.  All you need is a small launcher (wrapper script) that tells the dynamic linker about it.
I was talking about now! :P A launcher script is only needed to force an app that you don't compile yourself to use different libraries than the default ones. (For example, to use the old AirPort Utility on newer versions of Mac OS X.)
Me too; I meant that an application on any OS can be designed to do that.  On some OSes, the launcher script is needed; for example, on Linux, to set certain environment variables (LD_LIBRARY_PATH, specifically).
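
Just to make it concrete, here is a minimal sketch of such a launcher, written as a tiny C program rather than the usual two-line shell script; the paths are made up for illustration:

Code: [Select]
/* Minimal launcher sketch: prepend the application's private library
 * directory to LD_LIBRARY_PATH, then exec the real binary.  The dynamic
 * linker of the exec'd process picks the variable up.  Paths are examples. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    const char *libdir = "/opt/myapp/lib";        /* bundled libraries   */
    const char *binary = "/opt/myapp/bin/myapp";  /* the real executable */
    const char *old = getenv("LD_LIBRARY_PATH");
    char path[4096];

    (void)argc;
    if (old && *old)
        snprintf(path, sizeof path, "%s:%s", libdir, old);
    else
        snprintf(path, sizeof path, "%s", libdir);

    setenv("LD_LIBRARY_PATH", path, 1);

    execv(binary, argv);   /* argv is passed through unchanged */
    perror("execv");       /* only reached if the exec failed  */
    return 1;
}

The wrapper only matters because LD_LIBRARY_PATH has to be set before the dynamic linker of the target process runs; the application itself needs no changes.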

People need to feel valued and trusted.
It is more complicated than that.  Supervisors/bosses expressing mistrust is damaging.  Denying any opportunity to show how the changes made affect users, and why the systems work so well that users forget they exist, is just petty.  I would say that being trusted and valued is definitely a good thing, but their absence is quite tolerable too; it is the unfounded expressions to the contrary, without any recourse for correcting the misconceptions, that damage one's psyche.



The thing about Linux is that you really need to make it conform to your workflow.  There is no "default" workflow, like the one most proprietary applications and OSes provide; it's just a mishmash of things random developers use.  For those who want a ready-to-use tool, that mishmash is horrible, and makes them see Linux as something disastrous, unusable.  That modification work really must be counted as a cost.  In my experience, if properly done, that cost is recouped quite quickly, no matter how big it might seem beforehand.

The only limitation to creating one's own Linux distribution is the sheer amount of software that a typical Linux workstation has, and the fact that a lot of it needs patches to fix its behaviour.  It is already more than a one-person job.
 
The following users thanked this post: tooki

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6723
  • Country: nl
Re: Gnu/Linux Considered Harmful
« Reply #231 on: November 21, 2018, 10:31:40 pm »
I dislike that he rags on Mir but doesn't give the same treatment to Wayland. When the Wayland developers decided to just double down on the erosion of network transparency instead of fixing it, I kinda lost hope for Linux ... with the speed of modern computers the overhead of network transparency for the basic GUI has become less relevant, not more, so just fix that shit and continue to use DRI for anything which actually needs local performance (i.e. video and 3D). Instead they give up network transparency to make their lives a tiny bit easier; it's a great example of un-linuxy Linux bullshit.

The moment he rags on Chrome OS for being locked down is the moment he loses credibility. Take out the write-protect screw and they are some of the most hackable modern laptops in existence. Really, the only competition for Purism if you want a modern laptop with a mostly open source bootloader and BIOS.
 

Offline Tepe

  • Frequent Contributor
  • **
  • Posts: 572
  • Country: dk
Re: Gnu/Linux Considered Harmful
« Reply #232 on: November 23, 2018, 02:50:13 pm »
Nope user data resides elsewhere.

here is my concept. A harddisk has 3 folders
- OS
- USER
- APP

When you install an application, a single file is saved to the APP folder. Everything an APP needs is contained in that file. (Think of it as a ZIP file: it contains an internal file system with all the subfiles it requires.)
In USER there is also an APP folder. That contains the <application>.SETTINGS file. The APP can only write to its own SETTINGS file (the OS governs that; no stepping out of bounds; apps can only write to their own .SETTINGS file). The APP package contains a DEFAULT.SETTINGS; when a user launches the application for the first time, that one is saved to the user's space (again, the OS does that, not under control of applications).

so
- USERS\ME\APP\excel.settings
- USERS\MYWIFE\APP\excel.settings
- APP\EXCEL.APP  <- this contains everything required to run excel , including a default.settings file.

The OS and APP folders are READ only for applications. Applications can only read their own .APP file. No peeking in other files or in the OS folder.
Applications must register a file extension during install. They can only WRITE to their registered file extensions. They can read any other data file in \users, so they can always import data from other applications, but they can only muck up their OWN data files.
How do you produce new applications in such an environment ("No peeking in other files")?
 

Offline tooki

  • Super Contributor
  • ***
  • Posts: 11600
  • Country: ch
Re: Gnu/Linux Considered Harmful
« Reply #233 on: November 23, 2018, 07:13:31 pm »
here is my concept. A harddisk has 3 folders
- OS
- USER
- APP

When you install an application, a single file is saved to the APP folder. Everything an APP needs is contained in that file. (Think of it as a ZIP file: it contains an internal file system with all the subfiles it requires.)
In USER there is also an APP folder. That contains the <application>.SETTINGS file. The APP can only write to its own SETTINGS file (the OS governs that; no stepping out of bounds; apps can only write to their own .SETTINGS file). The APP package contains a DEFAULT.SETTINGS; when a user launches the application for the first time, that one is saved to the user's space (again, the OS does that, not under control of applications).

so
- USERS\ME\APP\excel.settings
- USERS\MYWIFE\APP\excel.settings
- APP\EXCEL.APP  <- this contains everything required to run excel , including a default.settings file.

The OS and APP folders are READ only for applications. Applications can only read their own .APP file. No peeking in other files or in the OS folder.
Applications must register a file extension during install. They can only WRITE to their registered file extensions. They can read any other data file in \users, so they can always import data from other applications, but they can only muck up their OWN data files.

If I need to move to different hardware, I take the APP file and fling it onto the other machine. When I first launch it, the OS will create a new .SETTINGS file; if I copied over the .SETTINGS file, it will use that one.

The OS contains functions to safely move .APP and .SETTINGS files on and off a machine.

How neat would that be. No more viruses, no more runaway programs that overwrite their own or other programs' files. No more data snooping; they simply can't: programs only see their own files contained in their .APP file, and that file is read-only. They can only write to their own .SETTINGS files and to approved file extensions.
What you've described is a modern sandboxed app environment. That's nearly exactly how modern apps work on, for example, iOS. Sandboxed apps on macOS work much the same way.
 

Offline djacobow

  • Super Contributor
  • ***
  • Posts: 1154
  • Country: us
  • takin' it apart since the 70's
Re: Gnu/Linux Considered Harmful
« Reply #234 on: November 24, 2018, 12:53:00 am »
How do you produce new applications in such an environment ("No peeking in other files")?

To a first approximation, you don't. You can't for iOS on an iPhone. You need the "messier" environment of a, errm, uh, real computer. For iOS, you'll need to run Xcode, which means you'll need a Mac. (I understand there are ways to get around this to use Windows, but I'd be surprised if you could do it from, say, an iPad.)

I believe this is generally possible for Android using something like AIDE, but I do not know how practical it is. I can't see writing a lot of Java without a proper keyboard, but of course you can always plug one in.


 

Offline borjam

  • Supporter
  • ****
  • Posts: 908
  • Country: es
  • EA2EKH
Re: Gnu/Linux Considered Harmful
« Reply #235 on: November 26, 2018, 09:50:58 am »
How do you produce new applications in such an environment ("No peeking in other files")?

To a first approximation, you don't. You can't for iOS on an iPhone. You need the "messier" environment of a, errm, uh, real computer. For iOS, you'll need to run Xcode, which means you'll need a Mac. (I understand there are ways to get around this to use Windows, but I'd be surprised if you could do it from, say, an iPad.)

I believe this is generally possible for Android using something like AIDE, but I do not know how practical it is. I can't see writing a lot of Java without a proper keyboard, but of course you can always plug one in.

There is a lot of confusion around this issue. I think several issues are mixed up here.

First, it's the job of the operating system to authorize or prevent access to a resource. It has nothing to do with the development environment. If that were the case, it would be trivial for malware to bypass restrictions, wouldn't it? You can't assume that a malware writer or a dishonest coder will play nice!

The key issue here is the operating system security model. We have grown used to the Unix security model and others inspired by it. The thing is, it's obsolete for our current applications.

Unix is a multi-user operating system. It was conceived to allow different users to share a computer and to offer some simple protection of their resources (files, processes, etc.). So Bob could prevent Alice from reading his files.

Now the threat model is different. Unix based computers are mostly single user workstations. And the risk is a security problem in an application program (the typical text processor security issue) giving an attacker access to your other files.

While many people discussing security focus on how hard it is to achieve superuser access to a Unix system, reality is different. It has become irrelevant if all the attacker wants is to read your files, use your computer to send spam, mine cryptocurrency, or a myriad of other things. The miscreant wants to be able to do the same things you would do as a regular user.

Can this problem be fixed? Of course. But it's a matter of operating system architecture, not just of clever tricks and patches. Unfortunately, as Pike rightly lamented, operating systems research is largely being ignored. Systems such as Plan 9 (using namespaces) or Amoeba (using capabilities) offered some tools to tackle this problem.

We should go from a multi user operating system to a multi application operating system in which an application doesn't get automatic access to all the resources owned by the user running it.

This is a superficial example of one possible model.

Imagine you are running Photoshop. You want to open one of your photographs. So you select File -> Open, and a directory listing appears. You select a file and press "Open", so you open it.

In the "traditional" multi user operating system this is what happens when the user wants to open a file.

  • The user selects File->Open
  • The application creates a file selection window
  • The user selects a file and presses "Open"
  • The application opens it. Note that the application can open any file it "wants" in this case.

The file selection dialog is part of the application process, running with the same privileges, hence with access to all the files. It's very convenient for many developers because they can implement their own file selection window. But of course it has a problem: this doesn't give us any isolation between applications.

Now, imagine a "multi application" OS.
  • The user selects File->Open
  • The application sends a message to the OS asking for a file to be opened.
  • The OS opens a file selection window which is not a part of the application process.
  • The user selects a file and presses the "open" button.
  • The OS creates an authorization for the application to open that particular file. Let's say it's some kind of "ticket".
  • The application opens the "ticket".

Something similar would happen if the user picked up the file icon and dragged it onto the application icon. The trusted part of the OS would deliver a message to the application: "Open this file", but instead of giving a classical file descriptor as a parameter, it could be an authorization ticket.

It's a bit more complicated than this of course, because the OS should have adequate controls for UI automation operations done from any application. But the core mechanism would still be something of the sort.
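
For what it's worth, Unix-like systems already have one primitive that such a "ticket" could be built on: a trusted broker process can open the file the user chose and pass only the open descriptor to the application over a Unix-domain socket. A minimal sketch of the broker's sending side (socket setup omitted, names purely illustrative):

Code: [Select]
/* Broker side: hand an already-opened file descriptor to the (untrusted)
 * application over an AF_UNIX socket using SCM_RIGHTS.  The application
 * receives a usable descriptor without ever having the right to open
 * arbitrary paths itself. */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int sock, int fd)
{
    char dummy = 'F';                          /* must send at least one byte */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
    union {
        char buf[CMSG_SPACE(sizeof(int))];     /* aligned ancillary buffer    */
        struct cmsghdr align;
    } u;
    struct msghdr msg;

    memset(&msg, 0, sizeof msg);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof u.buf;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type  = SCM_RIGHTS;             /* "here, take this descriptor" */
    cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) < 0 ? -1 : 0;
}

The application picks the descriptor up with recvmsg() and can then read or write that one file and nothing else, which is essentially the ticket described above.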

Actually, there is some evolution towards this model. I remember when Apple was ridiculed because iOS doesn't separate processes into different user ids. The truth is, it uses other mechanisms, which means that the user id is irrelevant. Also, they are applying stricter controls on OS X based on sandboxes.

A question remains, however: a sandbox is what I call an "a posteriori" security measure. You have a set of resources and you establish some protection measure as an afterthought. I would rather prefer the "a priori" approach, in which everything is protected unless access is explicitly granted.

But that means you need a complete overhaul of the security model and it would surely be painful for many application developers. There is a lot of inertia right now.
« Last Edit: November 26, 2018, 09:52:48 am by borjam »
 
The following users thanked this post: Tepe

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23034
  • Country: gb
Re: Gnu/Linux Considered Harmful
« Reply #236 on: November 26, 2018, 10:13:39 am »
Completely agree. Best solution I have seen is the unveil(2) system call in OpenBSD: https://man.openbsd.org/unveil.2

Every process should only see what it needs to do its work. Then you don't get things like rogue firefox addins writing into your profile scripts (this is currently possible - I have demonstrated it with an addin zero day).
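
A minimal sketch of how it is used (the paths are just examples); after the final call, the rest of the filesystem simply ceases to exist for the process:

Code: [Select]
/* OpenBSD: restrict this process (and everything it forks or execs) to the
 * few paths it actually needs, then lock the list.  Example paths only. */
#include <err.h>
#include <unistd.h>

int main(void)
{
    if (unveil("/etc/resolv.conf", "r") == -1)      /* read-only          */
        err(1, "unveil");
    if (unveil("/home/me/Downloads", "rwc") == -1)  /* read/write/create  */
        err(1, "unveil");
    if (unveil(NULL, NULL) == -1)                   /* no further changes */
        err(1, "unveil");

    /* From here on, every other path on the system is invisible. */
    return 0;
}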
« Last Edit: November 26, 2018, 10:15:44 am by bd139 »
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6285
  • Country: fi
    • My home page and email address
Re: Gnu/Linux Considered Harmful
« Reply #237 on: November 26, 2018, 02:06:11 pm »
You guys are clearly talking about graphical user interfaces only.  When I do real work, I usually use command-line commands to pre/postprocess huge data files, and often chain commands.  What you are talking about does not suit my workflow at all.

However, when I browse the web, or download and open a document, the suggested restrictions would be very useful.  They are not even that difficult to implement -- except for the sheer number of libraries those applications rely on, and their need to be able to read from and write to a large number of files (though mostly in personal preferences) and sockets.

In fact, Linux already provides the necessary OS/kernel-level support for this: seccomp(). Simply put, a library can install filters that restrict which syscalls the kernel will allow the process to make.  It is also possible for the widget the human user uses to choose a file to be a completely separate process, which provides the target file as a descriptor/file handle to the application.  My point is, this model is already possible in Linux and OpenBSD at least. It is not an OS problem; they can already do this.
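
As a minimal sketch of what that looks like with libseccomp (link with -lseccomp); the whitelist here is deliberately tiny and only illustrates the mechanism, a real application would need to allow far more:

Code: [Select]
/* Linux: after this function runs, the calling process (and any threads or
 * children it creates afterwards) can only read, write and exit; any other
 * syscall kills it. */
#include <seccomp.h>
#include <stdlib.h>

static void lock_down(void)
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);   /* default: kill */
    if (!ctx)
        abort();

    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

    if (seccomp_load(ctx) != 0)     /* hand the filter to the kernel */
        abort();
    seccomp_release(ctx);
}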

The problem is, nobody is willing to put up the resources and requirements for application and library developers to do this. The pool of developers paranoid enough (to not trust even their own code to behave, which is necessary for this kind of development) is also pretty small, most developers believing that error checking and security are something that can be added on top later, if there is additional money for it.

So, that discussion is really unfair/off-topic here, considering the thread title, and that GNU/Linux is one of the OSes that would allow that approach right now, with the necessary kernel-level support to make it robust.

For a really robust approach, one would rewrite the desktop environment (starting at the X server, changing the protocols as well). The network/socket comms protocol is optimal for this, because it allows clear isolation barriers.  All applications would be started via a wrapper that limits the syscalls available. (These limitations are enforced by the kernel, and cannot be undone; all threads and child processes created will inherit the limitations.) Files and sockets would only be available by request from an arbitrator process, part of the desktop environment, and completely under user control.  (When the application wants to open a file, it can do so only by requesting the descriptor from the arbitrator process; the arbitrator process would display the file dialog instead of the application. Read-only access to application-specific configuration files could be granted automatically, based on the current user and the application/executable). Internal computation would not be affected at all, so the overhead of the arbitration/access controls would be insignificant.

Funnily enough, something similar to this already exists for Linux, developed by NSA: SELinux.  It is obviously more oriented towards services than applications, but it does assign "security labels" for each service (application) and each file and directory, and tightly controls access across "security labels".  Thus far, nobody has bothered to construct a working policy for a desktop environment, that's all.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23034
  • Country: gb
Re: Gnu/Linux Considered Harmful
« Reply #238 on: November 26, 2018, 02:08:04 pm »
I think it's because it's not interesting doing that. And Linux is very interest driven. Everyone wants the sexy jobs.
 

Offline borjam

  • Supporter
  • ****
  • Posts: 908
  • Country: es
  • EA2EKH
Re: Gnu/Linux Considered Harmful
« Reply #239 on: November 26, 2018, 02:16:11 pm »
You guys are clearly talking about graphical user interfaces only.  When I do real work, I usually use command-line commands to pre/postprocess huge data files, and often chain commands.  What you are talking about, does not suit my workflow at all.
Pointing at me, you? You dare? Eh? Eh? ;)

Now seriously. The file selector thing is an example. A shell can do the same job when expanding wildcards or just accepting arguments. The basic concept is the same.

Quote
However, when I browse the web, or download and open a document, the suggested restrictions would be very useful.  They are not even that difficult to implement -- except for the sheer number of libraries those applications rely on, and their need to be able to read from and write to a large number of files (though mostly in personal preferences) and sockets.
Of course there are files that must be accessible to any application, like system libraries, etc. On the other hand, on Plan 9 you could run an application without any network access if needed. Just prune its namespace and voila! No network.

Quote
In fact, Linux already provides the necessary OS/kernel level support for this, seccomp(). Simply put, a library can install filters that restrict which syscalls the kernel will allow to run.  It is also possible for the widget which the human user uses to choose a file, to be a completely separate process, which provides the target file as a descriptor/file handle to the application.  My point is, this model is already possible in Linux and OpenBSD at least. It is not an OS problem; they can already do this.
It is an OS problem because doing that is incredibly complicated. That's why OS architecture matters, not just facilities. That statement is comparable to saying "why do you program in C++ when you can do anything in assembler?" ;)

Quote
The problem is, nobody is willing to put up the resources and requirements for application and library developers to do this. The pool of developers paranoid enough (to not trust even their own code to behave, necessary for this kind of development) is also pretty small, most developers believing that error checking and security is something that can be added on top later on if there is additional money for it.
So again it's the task of the OS architects to provide a proper security architecture, something a bit beyond what was good in 1970.

Quote
So, that discussion is really unfair/off-topic here, considering the thread title, and that GNU/Linux is one of the OSes that would allow that approach right now, with the necessary kernel-level support to make it robust.
Indeed it is; I am not criticizing Linux for trying to be another Unix. However, the security issues did somewhat come up.

Quote
Funnily enough, something similar to this already exists for Linux, developed by NSA: SELinux.  It is obviously more oriented towards services than applications, but it does assign "security labels" for each service (application) and each file and directory, and tightly controls access across "security labels".  Thus far, nobody has bothered to construct a working policy for a desktop environment, that's all.
However, it's not so easy. I have used the MAC subsystem on FreeBSD (more or less the same thing) and I had some funny and intended consequences, such as processes that, despite escalating to root, would still be harmless.

But it's still far from a complete solution: running PHP on that demanded modifying some low-level PHP code (not difficult at all, but not good!), and most traditional Unix programs will just break.
« Last Edit: November 26, 2018, 02:20:08 pm by borjam »
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6723
  • Country: nl
Re: Gnu/Linux Considered Harmful
« Reply #240 on: November 26, 2018, 04:04:23 pm »
I think the invention of C/Unix was far more harmful to IT than beneficial. Many people celebrate both, but should we really?

C is a perfect language for hackers, both in the traditional and the modern sense ... what benefit has that had for us? Happier developers, slightly faster software, and trillions of dollars' worth of damages from buffer overflows and use-after-frees. There were more principled languages competing with it at the time, but they all got blown away by the performance advantage of portable assembler, except in a few niches where people realized that C is fucking retarded for security and the performance advantage is rarely worth it (i.e. the US military's love affair with Ada).

Similarly for Unix ... capability-based security had a long history before the recent resurgence; without Unix it might very well have won out over ACLs. Again, I think it would have been a vast improvement.

tl;dr: C/Unix were a disaster.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23034
  • Country: gb
Re: Gnu/Linux Considered Harmful
« Reply #241 on: November 26, 2018, 04:34:18 pm »
I wouldn't say they were a disaster, but certainly far from perfect.

The problem however is that there is nothing better with any momentum and maturity and these are what drives things really at the end of the day.

You can make a far better bit of technology but you will get no adoption.

I was rather excited to see the development of a magic bullet for this, ARX, back in the late 1980s. Basically Unix with a Mach kernel, Modula-2+ as the userland language, NeWS. The lot. On a desktop machine with an ARM CPU that you could literally go and buy in the shops in the UK. This was to be delivered in 1987, which would have been a MASSIVE leap like none ever before in computing power. Going from some 8-bit turd to that was insane.

The vendor, Acorn, had to dump it in favour of something some game programmers hacked up in a couple of months (RISC OS which was mostly written by AcornSoft programmers) because they couldn't actually ship an OS like that in any reasonable amount of time.

I like to think the world would be different if they delivered ARX.

Got this instead, which was pretty amazing, so I bought the machine anyway!

« Last Edit: November 26, 2018, 04:36:19 pm by bd139 »
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8184
  • Country: fi
Re: Gnu/Linux Considered Harmful
« Reply #242 on: November 26, 2018, 05:48:08 pm »
I think the invention of C/Unix was far more harmful to IT than beneficial. Many people celebrate both, but should we really?

As someone who writes everything in C (and sometimes sees it's the wrong choice for a particular task, but still does it...), I understand your point very well, and I think you have every right to call C and Unix harmful or even disastrous. But some extremely complex things are being done with these tools. This "more harmful than beneficial" argument is impossible to substantiate. What would we have instead, how widespread would it be, would it be better, or would it be even worse? Maybe we would be years ahead in computing? Maybe we would have saved billions in damage, even human lives? Or maybe we would be back in the stone age, and just use computers less because they would suck even more than they do now?

Such an alternative universe doesn't exist for direct comparison, and it could be very much different, for very much better or worse.

Many modern alternatives have been more and more disastrous. I'll take a buggy and slow-to-write C program, with array over-indexing and overrunning of null-terminated strings and multiple kinds of unintuitive, unexpected behavior shit, every day, over the even buggier, and even slower to write, "but-this-is-the-trend-bloat-framework-of-the-year-so-it-can't-be-bad" bullshit.
« Last Edit: November 26, 2018, 05:59:08 pm by Siwastaja »
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6723
  • Country: nl
Re: Gnu/Linux Considered Harmful
« Reply #243 on: November 26, 2018, 06:13:37 pm »
I disagree; the framework churn and verbosity of Java might have harmed the sanity of a lot of developers ... but as far as exploits are concerned, the software written with it has been pretty good (Java sandbox exploits are irrelevant to that). It's the quick-and-dirty hacking languages (JavaScript, PHP, etc.) which cause most of the problems.

Trusting application developers to dictate language design is like trusting rich people to dictate economic policy ... they will generally rationalize to serve their self-interest. There are always exceptions, of course, but all in all it's a bad idea. They overestimate the importance of their own comfort and work speed.
 

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8518
  • Country: us
    • SiliconValleyGarage
Re: Gnu/Linux Considered Harmful
« Reply #244 on: November 26, 2018, 08:02:37 pm »
Nope user data resides elsewhere.

here is my concept. A harddisk has 3 folders
- OS
- USER
- APP

When you install an application, a single file is saved to the APP folder. Everything an APP needs is contained in that file. (Think of it as a ZIP file: it contains an internal file system with all the subfiles it requires.)
In USER there is also an APP folder. That contains the <application>.SETTINGS file. The APP can only write to its own SETTINGS file (the OS governs that; no stepping out of bounds; apps can only write to their own .SETTINGS file). The APP package contains a DEFAULT.SETTINGS; when a user launches the application for the first time, that one is saved to the user's space (again, the OS does that, not under control of applications).

so
- USERS\ME\APP\excel.settings
- USERS\MYWIFE\APP\excel.settings
- APP\EXCEL.APP  <- this contains everything required to run excel , including a default.settings file.

The OS and APP folders are READ only for applications. Applications can only read their own .APP file. No peeking in other files or in the OS folder.
Applications must register a file extension during install. They can only WRITE to their registered file extensions. They can read any other data file in \users, so they can always import data from other applications, but they can only muck up their OWN data files.
How do you produce new applications in such an environment ("No peeking in other files")?
Simple: you have a development tool.

To create a new program, you create a new project package.
Before you can write a single line of code, you need to give your tool a name and a file extension. That will be the only kind of file you will be allowed to write to store the work being done.
Next, you will select what files you want to be able to read from the pool of known file formats.
Next, you will have to define the structure of the file you want to write. All data should be stored in HUMAN READABLE format, NO BINARY blobs; the data file format can be some sort of JSON/XML type. To save disc space it is OK to store the data on disc as a compressed form of the actual XML data; the compression/decompression is part of the file system and operating system.

This file format descriptor gets published with your application and becomes part of the operating system services. Other programs can call the operating system and say 'I want to read a file of this type'. The operating system checks whether it has a descriptor for it; if it does, it returns the scheme. The application can now use the scheme to access the data in that file. It can NOT write such files, only read them! There is rarely any need to write other file formats anyway. Why would 'Excel' bother to write 'Word' files? Word can read Excel files, transform them, and use the data to visualize as a table in a text document. Nothing prevents that. Word calls the OS and says: I need data from an Excel file, give me the file layout scheme so I can parse it.

As long as we are developing, you can modify the format descriptor, of course.

Now you can start writing code. Your program runs sandboxed and can write its own data types. The operating system handles a lot of base functions, like file access, windowing, user interface, and other APIs.
Code development is no different than on any other operating system.

The only difference is that you first need to tell the OS your data format and associate it with your 'to be created' program. After that, everything is the same.

Another advantage would be: no inconsistencies in user interfaces anymore. File selection, buttons, scroll lists: all are handled by the operating system. The user has no need to create yet another file browser or color scheme. The OS deals with that. If the user changes the color scheme in the OS, then all applications fall into place. If an OS developer creates a new file-open browser, then all applications automatically use it.

Every part of the OS is modular and has an input and output pipe that conforms to a defined scheme. The guts can change, but the I/O is defined. If I decide to write a new file-open UI, I can do that. I can replace the existing one, but I have to conform to the scheme for the file-open handler.

For example: the OS UI element 'fileopener' takes 'application_handle' as an argument and returns a pointer to the data.
fileopener itself is responsible for the visualisation part. It uses calls to buttons, listboxes, and other things which are themselves part of the OS. An app cannot change the color or font of a textbox; those are defined in a settings file.

To pull a list of files, the UI application calls a windowless fileopener (part of the core OS) and presents the application_handle. The OS uses this handle to determine who is trying to open a file and, based on that, will grant read or read/write permissions, and decide what it will let you see as 'openable' files. (The application_handle allows the operating system to read the application descriptor, which tells the OS what file types can be read; files that are 'unreadable' are simply not shown.)
Using operating system calls, the directory structure can be retrieved and visualized/navigated. When a file is selected, the OS retains a list of which app has what file open in what mode. If a write occurs to a file that is open, the OS can signal a data change to the apps that have it open for 'read'.
The operating system now returns a handle to the requested file; subsequent accesses are done through that handle. File I/O calls are not like what we have now, where you read blocks of data. File I/O is in the style of 'get me the data under key this-n-that'. A file is essentially a database with keys and data fields that follow a defined scheme. To retrieve the contents you tell the operating system 'from this handle, get me the contents of this key', or 'write to this handle this key with that data'. Apps do not have raw access to persistent storage; everything is a human-readable database following a format (and the format needs to be declared with the application).

Think of a CAD program: no more closed CAD formats. The format is known and declared with the application.

An operating system should be a common set of tools an application can use. All I/O to hardware is shielded; applications simply read/write data, with no clue where it comes from. Unix already has this concept of 'data as streams'; I push it to 'data as databases with a defined format'.
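
To make it concrete, here is roughly what the application-facing side of that could look like. None of these functions exist in any real OS; every name is invented purely to illustrate the concept above:

Code: [Select]
/* Hypothetical API sketch for the "data as databases with a defined format"
 * idea.  Every name here is invented for illustration only. */

typedef int app_handle_t;    /* identifies the calling application to the OS    */
typedef int doc_handle_t;    /* an open "document" the OS has granted access to */

/* The OS shows its own file-open UI, filters it by the formats the calling
 * application declared as readable, and returns a handle (or failure). */
doc_handle_t os_open_document(app_handle_t app, const char *format_id);

/* All I/O goes through keys of the declared scheme -- never raw blocks.
 * Writing is only permitted for the application's own registered format. */
int os_read_key (doc_handle_t doc, const char *key, char *buf, int buflen);
int os_write_key(doc_handle_t doc, const char *key, const char *value);

int os_close_document(doc_handle_t doc);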






Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6285
  • Country: fi
    • My home page and email address
Re: Gnu/Linux Considered Harmful
« Reply #245 on: November 26, 2018, 10:05:30 pm »
It is an OS problem because doing that is incredibly complicated.
I disagree. You don't need to replace the entire OS, because all that needs changing is the desktop environment.

The OS architecture does not matter, because the low-level libraries (the standard C library, or the standard C++ library, various libraries the applications need, and the DE widget toolkit) are the ones that communicate with the OS kernel and the desktop environment.

Extending the problem to encompass an entire OS just makes for a lot of additional work, with nobody interested in funding the development.

I think the invention of C/Unix was far more harmful to IT than beneficial.
You think that way because you have no idea where C and Unix are really used.  The amount of research alone done on Unix or Unix-like systems is staggering.  Since 2017, all 500 of the world's most powerful supercomputers -- those used to do the best weather forecasts, for example -- run Linux. Before that, they ran some variety of Unix.  Any others, including Windows, have been very short-term blips on the list.

Almost all programming languages today use the standard C library. (Typically, their own standard libraries are written in C, or sometimes C++.)

Do you understand that if there were no C, we'd have Fortran, Pascal, some varieties of BASIC, Forth, and some Lisp varieties?  The majority of the software used today would not exist.  And you call that harmful.  Well, I definitely disagree.

That is not to say I don't wish we had something better than C to replace it.  I often doodle around and try to fathom what features such a language would have.  But it is far from obvious what that would/could/should be.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6285
  • Country: fi
    • My home page and email address
Re: Gnu/Linux Considered Harmful
« Reply #246 on: November 26, 2018, 10:14:23 pm »
I like to think the world would be different if they delivered ARX.
Me too.

I wonder that about quite a few points along IT history, actually.  What if the NSA hadn't blocked proper security for TCP?  What if the US had not allowed unsolicited email advertisements?  What if Microsoft had never achieved a near-monopoly on the desktop computer arena, and we had several competing OSes there?

Yet, none of that is proof that what we have now is inferior (or superior, for that matter).  We humans often think of things in simpler terms than they really are, and simply cannot perceive all the interconnections and causal relationships, so we tend to think that the alternatives we imagine would be better than what we have now.  All we can do, is do, and see if we can do better.  If we can fund the development, that is.
 
The following users thanked this post: Siwastaja

Offline FrankT

  • Regular Contributor
  • *
  • Posts: 176
  • Country: au
Re: Gnu/Linux Considered Harmful
« Reply #247 on: November 26, 2018, 10:37:52 pm »
What if Microsoft had never achieved a near-monopoly on the desktop computer arena, and we had several competing OSes there?

I think that would be a nightmare.  We already have PC vs Mac vs Linux.  Anyone* who buys one box expects any software to run on it.

More OSes?  Maybe that would mean jobs for developers.  But given how poorly some software is supported I doubt that.

Sorry - pushing the conversation OT.




* I was going to say "Anyone (non-technical)" but it's the linux users that seem to be the most vocal, demanding linux ports
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6723
  • Country: nl
Re: Gnu/Linux Considered Harmful
« Reply #248 on: November 26, 2018, 10:58:53 pm »
Since 2017, all 500 of world's most powerful supercomputers -- those used to do the best weather forecasts, for example -- run Linux.
It's open source and thus an easy housekeeping OS to fit all the bespoke code into, and also cheap to find developers for. That would likely have been true for some other open source OS as well.
Quote
Almost all programming languages today use the standard C library.
Which makes for a ton of fun whenever there is a buffer overflow in it.
Quote
Typically, their own standard libraries are written in C, or sometimes C++.
Yes it metastasized to the point that basically everything is broken.
Quote
Do you understand that if there was no C, we'd have Fortran, Pascal, some varieties of BASIC, Forth, some Lisp varieties?
And Smalltalk and whatever else would have evolved.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23034
  • Country: gb
Re: Gnu/Linux Considered Harmful
« Reply #249 on: November 26, 2018, 11:15:44 pm »
I'll probably get some groans here, but I rather liked Pascal. It was mega explicit.
 

