Author Topic: Altium REJECTS takeover bid from Autodesk  (Read 54130 times)


Offline tooki

  • Super Contributor
  • ***
  • Posts: 11536
  • Country: ch
Re: Altium REJECTS takeover bid from Autodesk
« Reply #200 on: March 26, 2022, 01:28:23 pm »
Something along that idea has already existed for a long time (decades!). It is not called a browser but a graphical terminal. And it is used on a large scale even today, although the name 'terminal' has been replaced by 'thin client'. These are typically used where people need some custom application for data entry / reading: a glorified typewriter.

Additionally, Google and Microsoft (AFAIK) also provide web-browser-based document editing & sharing.
Google, Microsoft, and Apple (and numerous others) all have web-based office software suites.

The problem is that they don’t perform quite as well as desktop apps, in particular with proper OS integration (like file association).
 

Offline MadScientist

  • Frequent Contributor
  • **
  • Posts: 439
  • Country: 00
Re: Altium REJECTS takeover bid from Autodesk
« Reply #201 on: March 26, 2022, 01:58:40 pm »
You don't need a bazillion functions.
file access (open, close, read, write, create, delete file/directory); select source (local/remote doesn't matter, it's unified, everything accessed through a GUID).
Give me a window to draw on. you cannot step outside this window. base functions to draw lines and whatnot.
Programs at startup get a chunk of memory and can request/release more if needed. They cannot step outside their chunk.
You don’t have the foggiest notion what operating systems actually do, then. If our operating systems did only what you propose (ignoring the fact that there are many, many ways to do those same things), we’d be going back to the operating systems of the 1960s-70s.

Networking as we know it? Gone. Internet? Gone. Drag and drop? Gone. A desktop? Gone. Standardized UI elements? Gone. Printing anything but monospace text? Gone. Sound? Gone. Inexpensive software? Gone. Peripheral support? Gone. Those things all work BECAUSE we have these “fat” operating systems running on top of the kernel, which is basically all you’ve proposed. Those fat layers make it possible to develop software quickly (and thus cheaply), have compatibility between programs, and have greater software stability (and usability) because developers aren’t wasting their time reinventing the wheel over and over again.

So yes, our OSes have a “bazillion” functions and that’s a decidedly good thing!!!!
not true. everything you have described is an application.
- networking ? nothing but remote data access. data that resides in remote storage is read, parsed and visualised, or stored on local storage. hey operating system, fetch me index.html at www.google.com
- internet ? see above. nothing but remote file reading
- drag and drop ? nothing but an application that tells the os to grab file x from location y and move it to location z. drag and drop for other objects ? the window manager telling the application: object x has moved from location y to location z. do something.
- desktop ? no difference.
- standardized ui elements : part of the os window manager.

you are not sacrificing anything.
the problem is that today we have multiple of these 'standard' systems and they are all incompatible with each other !
it would be great if we could make applications that could run completely inside a browser. then the browser becomes the OS layer. at least that world is (sort of) standardised. it doesn't matter if you use Chrome, Safari, or Edge: the site works.

That is what i am talking about. i can pick the os with the look and feel i want and can run any application. applications are not tied to one OS.

You are either deliberately misrepresenting things or simply haven't a clue how modern OSes and GUIs work. Drag and drop is not about files; that's merely one aspect of it. Internet access is not just about files.

You are presenting a 1980s argument in 2022. OS functionality is there to present abstractions for data handling, interprocess/inter-task management and communications, data object interchange, multimedia interchange, etc. The alternative is that each application has to build all this stuff itself (which is what happened in the 80s), and inevitably there is then no ability to share anything.

However, a modern OS designer has to pick a particular method of implementing all this functionality, and as a result we have different ways of achieving that end.

Whinging about "why can't I do this" is just nonsense. We are where we are because the PC market has competition (however flawed), and in a competitive environment you will always have someone saying "look here, I have a better way to do it".
EE's: We use silicon to make things  smaller!
 
The following users thanked this post: tooki

Offline free_electron

  • Super Contributor
  • ***
  • Posts: 8517
  • Country: us
    • SiliconValleyGarage
Re: Altium REJECTS takeover bid from Autodesk
« Reply #202 on: March 26, 2022, 02:27:57 pm »
You are either deliberately misrepresenting things or simply haven't a clue how modern OSes and GUIs work. Drag and drop is not about files; that's merely one aspect of it. Internet access is not just about files.
so then tell me: what is missing?
You keep on sticking to existing methods and stuff. raze it!
Make everything a storage medium that stores files. Every file has privileges. Every file can be accessed using a global indexing system.
no more need for complex systems. it's all transported using one method.

web servers ? don't need them. simply access index.html on volume www.google.com. A web server becomes nothing more than a remote file store. the URL type tells you what it is.
your own local harddisk is drive_c@mycomputer or drive_d@mycomputer (or something along those lines). A computer that is network-connected gets a sort of DNS entry and can be connected to. (there would be local hostnames too of course, similar to 127.0.0.1; essentially .mycomputer is "home")

www.google.com -> remote web server. a web browser will read index.html, parse it, and do the rest.
<anything>.mycomputer -> local stuff: drive_c.mycomputer, epson7100.mycomputer. this is already what unix systems do for devices.
<somebody>@mailservice.com -> filestore for email purposes.
remote applications run as a service that transports packets of data (a packet is nothing but a small file if you think about it), parses, processes, and responds with other files.
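The unified naming scheme above can be sketched as a toy dispatcher (the `@` syntax and default `index.html` come from the post; the handler logic and return strings are invented for illustration, not an existing protocol):

```python
# Toy dispatcher for the unified "everything is a file store" namespace
# described in the post. The address forms and handlers are illustrative only.

def resolve(address: str) -> str:
    """Classify an address and say which handler would service it."""
    if "@" in address:
        name, host = address.rsplit("@", 1)
    else:
        # Bare hostnames like www.google.com imply a default file, per the post.
        name, host = "index.html", address
    if host.endswith("mycomputer"):
        return f"local store: read {name!r} from {host!r}"
    if host.endswith("mailservice.com"):
        return f"mail store: deposit or fetch {name!r} at {host!r}"
    return f"remote store: fetch {name!r} from {host!r}"

print(resolve("www.google.com"))         # remote web server
print(resolve("drive_c@mycomputer"))     # local disk
print(resolve("somebody@mailservice.com"))  # mailbox
```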


The network stack could be much simpler now. no more need for all those different protocols; it's simple file transport. email ? same thing. an email is nothing but a small file that gets sent to a target: a storage bin somewhere with a public alias. me@mailbox.com is nothing but a remote drive that holds the files until you pick em up. if i send an email to me@mailbox.com, all i do is write a new "file" to that storage bin. that file can have a name like <sender>_<timestamp>_guid.mail.
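The mailbox-as-file-store idea can be sketched like this (the filename pattern is from the post; the directory layout and the `deliver` helper are invented for illustration):

```python
# Toy "mailbox is a directory, mail is a file" model from the post.
import time
import uuid
import tempfile
from pathlib import Path

def deliver(mailbox: Path, sender: str, body: bytes) -> Path:
    """Write one message as one file, named <sender>_<timestamp>_<guid>.mail."""
    name = f"{sender}_{int(time.time())}_{uuid.uuid4().hex}.mail"
    path = mailbox / name
    path.write_bytes(body)
    return path

mailbox = Path(tempfile.mkdtemp())       # stand-in for me@mailbox.com
msg = deliver(mailbox, "alice", b"hello")
print(msg.name.endswith(".mail"))        # True
print(len(list(mailbox.glob("*.mail")))) # 1
```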

everything is a simple file and a file read/write operation at "global level".
On your hardware there is a driver that lets the operating system read/write files over the network and store them on local volumes.
you would need minimal firmware to do this. you don't need random file access: bulk transport covers 99% of cases. remote databases would use a SOAP-like mechanism to access: command and control files out, and data bundle files coming back. it doesn't stop anything.
you'd have a lightweight networking and storage kernel.

interprocess communication ? the concept of a file in ram.
the actual operating memory would also be like a file. when a program is loaded from harddisk, a single file (the container for the program) is read into a memory file. think of the computer memory as a huge ramdrive. programs that allocate memory are doing nothing but creating a larger file. runtime memory is a ramdrive; run out of ram, swap it to disk. one program has one runtime file and can't step out of its boundaries. the operating system's task is to allocate, manage and abstract the physical storage into a single file.
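A minimal sketch of that one-runtime-file-per-program idea, assuming a growable byte region with enforced bounds (the `MemoryFile` class and its API are invented for illustration):

```python
# Toy model of "runtime memory is one file per program": each program owns
# one growable byte region and cannot address outside it.

class MemoryFile:
    def __init__(self, size: int):
        self._data = bytearray(size)   # the program's single "file" of RAM

    def grow(self, extra: int) -> None:
        self._data.extend(b"\x00" * extra)  # "request more" = enlarge the file

    def write(self, offset: int, payload: bytes) -> None:
        if offset < 0 or offset + len(payload) > len(self._data):
            raise MemoryError("write outside the program's region")
        self._data[offset:offset + len(payload)] = payload

mem = MemoryFile(16)
mem.write(0, b"hi")
mem.grow(16)           # allocation = making the file larger
mem.write(30, b"ok")   # fits after growing to 32 bytes
print(len(mem._data))  # 32
```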

everything else is applications that do stuff with the contents of the files.

The windowing manager provides a common set of UI elements and a drawing canvas. need anything not in the default ui element stack ? you can make your own. there are drawing commands and event messages.
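The event-message side of that idea might look like this minimal sketch (the event names, fields, and the drawing-command strings are invented for illustration):

```python
# Sketch of "drawing commands and event messages": the window manager delivers
# small event records, and the application responds with drawing commands.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str      # e.g. "click", "key"
    x: int = 0
    y: int = 0

class App:
    def __init__(self):
        self.log = []

    def handle(self, ev: Event) -> None:
        # The app only sees messages; it never touches the screen directly.
        if ev.kind == "click":
            self.log.append(f"draw_marker({ev.x},{ev.y})")
        elif ev.kind == "key":
            self.log.append("draw_text")

app = App()
for ev in [Event("click", 10, 20), Event("key")]:
    app.handle(ev)
print(app.log)  # ['draw_marker(10,20)', 'draw_text']
```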

Quote
in a competitive environment you will always have someone saying " look here I have a better way to do it "
absolutely ! i want the best of the best. but now i need three or more operating systems to satisfy that! x is the best in class but windows-only, y is best in class but mac-only, z is best in class but linux-only... with such a system it would not matter anymore. i can run anything.
i simply buy my "base system" (hardware + the base kernel) purely on the speed/memory and peripherals i need. if apple makes a faster box than dell, or lenovo has the peripherals i want, i buy that box. the applications will work.
operating systems would purely compete on implementation. the best operating system is the one that can get the work done the fastest with the least amount of resources. the applications would run no matter who built the OS.
You could extend this to hardware, to a point: optimize the motherboard and base pack. we use standardised i/o: ethernet and usb. actually, ditch ethernet; do everything over usb. wifi ? usb-based adapter. classic ethernet ? usb-based adapter. usb again is simplified to do "file" transport. usb is already point-to-point.

when i say "file" i mean a packet of data from beginning to end.
there would be one other interface for keyboard/mouse and other "slow user input". that works with very small, rapid-burst packets (single keystroke, mouse move, game controller move): something like a packet containing 8 or 10 bytes, always with a fixed layout.
all the rest has intelligence in the adapter.
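A sketch of such a fixed-layout input packet, loosely in the spirit of a USB HID report (the 8-byte layout and field meanings here are invented for illustration):

```python
# Fixed-size input record: every keystroke or mouse move is one small packet
# with an identical layout, as proposed above.
import struct

# 1 byte device type, 1 byte event code, two signed 16-bit coords, 2 pad bytes
PACKET = struct.Struct("<BBhhxx")
assert PACKET.size == 8

def pack_mouse_move(dx: int, dy: int) -> bytes:
    return PACKET.pack(0x01, 0x02, dx, dy)  # device 1 = mouse, event 2 = move

pkt = pack_mouse_move(-3, 7)
print(len(pkt))            # 8
print(PACKET.unpack(pkt))  # (1, 2, -3, 7)
```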

fun to think about. There have been systems that worked like that...

how do web applications work ? it's all html transport.
« Last Edit: March 26, 2022, 03:58:01 pm by free_electron »
Professional Electron Wrangler.
Any comments, or points of view expressed, are my own and not endorsed , induced or compensated by my employer(s).
 

Offline nctnico

  • Super Contributor
  • ***
  • Posts: 26907
  • Country: nl
    • NCT Developments
Re: Altium REJECTS takeover bid from Autodesk
« Reply #203 on: March 26, 2022, 05:52:13 pm »
The network stack could be much simpler now. no more need for all those different protocols; it's simple file transport. email ? same thing. an email is nothing but a small file that gets sent to a target: a storage bin somewhere with a public alias. me@mailbox.com is nothing but a remote drive that holds the files until you pick em up. if i send an email to me@mailbox.com, all i do is write a new "file" to that storage bin. that file can have a name like <sender>_<timestamp>_guid.mail.
No, like others already wrote: your view is way too simplistic. I just picked this snippet as an example to show that your suggestion is not how it is done. Email is typically (*) stored in a database so it can be indexed (and easily transported from one computer to another when replacing a computer). Storing every single email as a separate file is prone to trouble and cumbersome to index. Back in 1995 I used Eudora, and that already used a database-ish system with indexes to store email.

* Some email programs do use files, but that dates back a few decades to when email clients like Pine and Elm were used on Unix systems. That is when the internet was in its infancy and web browsers had not been invented yet.
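The indexing advantage can be illustrated with a minimal SQLite sketch (the schema is invented for illustration): one query finds messages by sender without scanning a directory of files.

```python
# Why mail clients index messages in a database rather than one file each:
# an indexed lookup replaces a scan over every message file.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mail (sender TEXT, subject TEXT, body TEXT)")
db.execute("CREATE INDEX idx_sender ON mail (sender)")
db.executemany(
    "INSERT INTO mail VALUES (?, ?, ?)",
    [("alice", "hi", "..."), ("bob", "re: hi", "..."), ("alice", "news", "...")],
)
rows = db.execute(
    "SELECT subject FROM mail WHERE sender = ? ORDER BY subject", ("alice",)
).fetchall()
print(rows)  # [('hi',), ('news',)]
```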

Quote
how do web applications work ? it's all html transport.
Not quite. It is HTTP transport, but the contents can be anything, like JSON- or XML-formatted database records to feed data into an HTML5 or JavaScript application running in the browser on the client side.
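A minimal sketch of that pattern: the HTTP body carries JSON, and the client-side application parses it into records (the payload here is a made-up stand-in, not a real server response):

```python
# HTTP is the transport; the body need not be HTML. Here a JSON string stands
# in for a response body that a browser-side application would consume.
import json

response_body = '{"records": [{"id": 1, "name": "R1"}, {"id": 2, "name": "C3"}]}'
data = json.loads(response_body)             # what the client-side app does
names = [r["name"] for r in data["records"]]
print(names)  # ['R1', 'C3']
```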
« Last Edit: March 26, 2022, 07:14:27 pm by nctnico »
There are small lies, big lies and then there is what is on the screen of your oscilloscope.
 

Offline MadScientist

  • Frequent Contributor
  • **
  • Posts: 439
  • Country: 00
Re: Altium REJECTS takeover bid from Autodesk
« Reply #204 on: March 26, 2022, 07:43:42 pm »
Sophisticated web applications today are combinations of back-end server processing and front-end client processing; it's much more than passing index.html around  :)

Interprocess communication needs queues, locks, semaphores, mutexes, all supplied by the OS.
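A minimal sketch of those primitives in use, with threads standing in for processes (a queue for message passing, a lock guarding shared state):

```python
# OS-supplied IPC primitives in miniature: a queue delivers work items and a
# mutex protects the shared counter they update.
import threading
import queue

q = queue.Queue()
total = 0
lock = threading.Lock()

def worker():
    global total
    while True:
        item = q.get()
        if item is None:      # sentinel: stop the worker
            break
        with lock:            # mutex protects the shared counter
            total += item
        q.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(1, 5):
    q.put(i)
q.put(None)
t.join()
print(total)  # 10
```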

The whole thing is simply way more complex than you present it. Your perspective is DOS from the 70s, where the OS provides extremely primitive support and the application must do everything else.

This leads to bloated apps and vast differences in user interfaces. Don't you remember the pre-GUI applications? Everything was different, nothing was interoperable; it was a nightmare.

EE's: We use silicon to make things  smaller!
 
The following users thanked this post: tooki

Offline tooki

  • Super Contributor
  • ***
  • Posts: 11536
  • Country: ch
Re: Altium REJECTS takeover bid from Autodesk
« Reply #205 on: March 26, 2022, 09:03:21 pm »
You are either deliberately misrepresenting things or simply haven't a clue how modern OSes and GUIs work. Drag and drop is not about files; that's merely one aspect of it. Internet access is not just about files.
Please learn to quote correctly. You’re attributing to me something said by MadScientist.


so then tell me: what is missing?
Where to begin… :palm:

No, seriously, you have NO clue how software works, and why your proposed system can’t work. There’s a good reason we abandoned that approach long ago: it’s inefficient, error prone, and simply not up to the task of the things we use computers for these days.

You keep on sticking to existing methods and stuff. raze it!
Make everything a storage medium that stores files. Every file has privileges. Every file can be accessed using a global indexing system.
That is how some aspects of modern OSes already work and have for decades.

But you clearly haven't the foggiest notion why a simple "everything is storage" model is not a viable approach. Suffice it to say that it's not, which is why we use different mechanisms for different things.

Not that I think you understand file handling anyway.

[mountains of drivel]
how do web applications work ? it's all html transport.
Oh sweet summer child… 1. It’s not all HTML. 2. It’s not all HTTP transport. (In fact nowadays most of it isn’t HTTP, it’s HTTPS at minimum, but that’s far from the only protocol.)


The complexity in modern OSes is there for a reason: it takes care of the complexity ONCE so that application developers don’t have to reinvent the wheel over and over again. That’s how things used to work, and it sucked.
 

