Author Topic: DOS vs. Linux  (Read 24809 times)


Offline bostonmanTopic starter

  • Super Contributor
  • ***
  • Posts: 1790
  • Country: us
DOS vs. Linux
« on: November 30, 2020, 03:38:03 am »
I learned DOS way back before Windows and picked up on it quickly. Today I find it's still easy to navigate and remember commands. I've also used Linux (or Unix?), however, I find it's extremely confusing.

Linux seems to have so many hidden commands. Recently I used dd to image a drive; it took a very long time to figure out whether my drive was connected and which one it was, and I kept getting command errors.

Also, I find things like updating and/or installing packages confusing. Anytime I want to do something, it seems I need to install a package, so I find the steps online. Once I execute the commands, I never understand what I've done, but I know it works.

My feeling and understanding is that Linux is much more powerful than DOS, but I find myself confused over what commands exist, what they do, and of course, things like apt-update....xxxx

Where do the apt-updates come from, and how do I know if I'm installing a virus?
 

Offline bob91343

  • Super Contributor
  • ***
  • Posts: 2675
  • Country: us
Re: DOS vs. Linux
« Reply #1 on: November 30, 2020, 06:57:28 am »
Perhaps it's time to take a short course to straighten out some likely misconceptions.  I am not an expert but when you run into this sort of difficulty it's probably because your understanding is too limited.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: DOS vs. Linux
« Reply #2 on: November 30, 2020, 07:17:40 am »
Linux is FAR more powerful than DOS. This should be obvious when you consider that Linux is a clone of Unix, which ran on large mainframes in the era when DOS was released for the far less powerful desktop PC. Once DirectX for Windows 95 started to allow gaming on Windows and Windows really took over, the development of DOS pretty much froze, and it still does pretty much what it did in the 80s. In the meantime PCs and other devices have become so powerful that running a full Unix-like multiuser OS is no problem.

I cut my teeth on DOS and used it for many years, well into the Windows era, but over the last 5 years or so I've done more and more on Linux, and now when I use the command prompt on my Win7 laptop I really miss all the stuff I can do from a Linux terminal.

Apt updates come from repositories listed in the text file /etc/apt/sources.list. The default repositories are vetted and safe, although you should exercise caution when adding random repositories you find. That said, viruses in open source software are very rare, because with the source open it would not take long for someone to find one.
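To make the sources.list mechanism concrete, each enabled entry follows a simple `deb <mirror-url> <suite> <components...>` format, and those fields are exactly what `apt update` parses. A small sketch below inspects such an entry; the file is a hypothetical copy written under /tmp so nothing system-wide is touched (the real file lives at /etc/apt/sources.list):

```shell
# Write a representative sources.list entry to a scratch file
# (the real one is /etc/apt/sources.list):
cat > /tmp/sources.example <<'EOF'
# comment lines are ignored by apt
deb http://deb.debian.org/debian bullseye main contrib
EOF

# Print the suite and components apt would fetch package lists for,
# skipping comments and blank lines -- the same fields apt itself reads:
awk '!/^#/ && NF >= 4 { print "suite:", $3, "components:", $4, $5 }' \
    /tmp/sources.example
```

Running this prints one line per enabled entry, which is a quick way to audit what a machine is actually pulling from before adding anything new.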

I second the recommendation to read a book or take a course, depending on what works well for you. There are a number of Linux courses on Udemy, and almost all of their courses go on sale frequently for $10-$15 so never pay the inflated full price. Lots of tutorials online too, and random youtube videos, do whatever works for you.
« Last Edit: November 30, 2020, 07:21:38 am by james_s »
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11259
  • Country: us
    • Personal site
Re: DOS vs. Linux
« Reply #3 on: November 30, 2020, 07:38:50 am »
APT specifically installs things from a repository. There is a limited set of people who maintain packages, and it is not that easy to sneak in a totally malicious program.

But some programs are legitimately destructive by their nature. Formatting a drive is what a drive formatting utility does. This is destructive, but you know ahead of time what you are installing.

It would be virtually impossible to put something virus-like into the repository.

Things are different when you download some random *.deb package, which is how a lot of commercial software is distributed. Then all bets are off: it can be anything, and you have to trust the publisher. This is no different from downloading a random *.exe installer on Windows.

And it is better to understand what the commands do. Some advice you will find on sites like StackOverflow is just bad. Generally you want to find out which package contains the program you need and install it yourself using apt-get.
« Last Edit: November 30, 2020, 07:41:00 am by ataradov »
Alex
 

Offline Ed.Kloonk

  • Super Contributor
  • ***
  • Posts: 4000
  • Country: au
  • Cat video aficionado
Re: DOS vs. Linux
« Reply #4 on: November 30, 2020, 07:58:34 am »
iratus parum formica
 

Online Berni

  • Super Contributor
  • ***
  • Posts: 4955
  • Country: si
Re: DOS vs. Linux
« Reply #5 on: November 30, 2020, 08:02:53 am »
Yep linux is far more powerful but has a different target audience compared to DOS.

DOS (and later on Windows) is designed for the average PC user. Sure, back in the DOS days you still needed to know a fair bit about computers to actually install and configure DOS, but it is something that someone determined enough can get through with the help of a simple manual. It is made to "just work", and the main task of DOS back then was to run software, and that is it. What it could mostly do is file management, since that was required to actually get executables to where they could be run, while also providing basic IO to them. And this is what most PC users wanted. They just needed DOS as a tool to be able to run WordStar or their favorite DOS game.

Back then PC users were expected to know about computers in order to use them, and DOS set that bar fairly low so that as many people as possible could use it.

Linux on the other hand is an OS made by geeks for geeks, offering some very powerful functionality at the cost of being so much more complicated. For someone who is mostly going to use the OS to manage files and run programs, all that complexity is a hindrance rather than a help. Nonetheless, these days Linux is everywhere, so it can be worth giving it a shot and learning some of it; it's much easier to use now.
 

Online Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11633
  • Country: my
  • reassessing directives...
Re: DOS vs. Linux
« Reply #6 on: November 30, 2020, 08:54:39 am »
Also, I find things like updating and/or installing packages confusing. Anytime I want to do something, it seems I need to install a package, so I find the steps online. Once I execute the commands, I never understand what I've done, but I know it works.
who wants to install packages from the command line nowadays? recovery tools are now all in the Windows GUI. in M$ Windows, you google, you download and click setup.exe, done! tell me what a Linux command line can do that Windows (the GUI) can't? i'm stuck with AutoCAD just because of this: i got used to typing quicker than finding which button on the ribbon bar in newer apps such as Inventor or SolidWorks etc (no command line), but i guess this is a curse from the 1995 era. probably the same reason why people still use Eagle instead of KiCAD/Diptrace/Altium. i suggest you get rid of it while you still can: find an OS that can do everything from a GUI, and if you get used to that, things should be much quicker.
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): Its now indisputable that... organisms “expertise” contextualizes its genome, and its nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 

Offline Lindley

  • Regular Contributor
  • *
  • Posts: 195
  • Country: gb
Re: DOS vs. Linux
« Reply #7 on: November 30, 2020, 09:53:09 am »
Have to agree with @bostonman, it's not easy for the beginner or occasional user to understand where all these 'strange' commands come from.

Having used the Rasp Pi, when trying to set up some basic projects you have to follow web instructions for all the command line instructions to import things, but there's no clear and easy way to understand how or why they are needed.

While it's probably easy in a work environment to pick up and discuss all these points with colleagues, the average home user alone can find it difficult to get going with these details.

Any links to some good tutorials on these points most welcome.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #8 on: November 30, 2020, 10:10:13 am »
Also, I find things like updating and/or installing packages confusing. Anytime I want to do something, it seems I need to install a package, so I find the steps online. Once I execute the commands, I never understand what I've done, but I know it works.
who wants to install packages from the command line nowadays?
For Debian-derivatives, I can warmly recommend Synaptic, a GUI-based package manager interface.  It has very good search facilities (allowing you to search keywords in the package name, or package description, or in both).  It is not as powerful as the command-line interface, but for day-to-day tasks, it is very useful.  It also automatically tells you what other packages it marks to be installed if there are dependencies.

I do also use a couple of PPAs, or Personal Package Archives.  These are basically miniature repositories intended for individual software suites or families of applications.  Currently I have the FreeCAD (stable) and OpenSCAD ones enabled, as the mainline repository versions are older than I like.  To add a repository, I recommend using the sudo apt-add-repository etc. commands shown for that repository, then running sudo apt update to reload the repository package lists, and finally firing up Synaptic to look for the package to install.

The only software I have installed the Windows way -- via an installer instead of a repository and package management -- is Arduino + Teensyduino.  However, they install as the user running them (without superuser privileges), so that's okay.  (My udev rules for microcontroller support are customized, though.)  Although I really don't like the Arduino editor much, and have considered switching to e.g. PlatformIO.  One of the custom scripts I use with Arduino is one that reboots and autodetects Pro Micro clones' serial port (the cheap ATmega32u4's that have the Arduino Leonardo bootloader, and are treated as Arduino Leonardos in the Arduino environment, that you can get for ~ $5 USD apiece off eBay); I like those, but they can be a bit fiddly to program without that utility.  (It interposes avrdude, that's all.)

Like AVE says, the learning curve is steep, and you will suck at the beginning.  Some of that old DOS muscle memory may even be hindering you, because you already have expectations of how things should work, and you're finding out Linux works differently.  It is unfortunate, but I haven't found any way around it.  The sooner you decide it's just another tool, and set out to find how it works best for you, the easier it will be.
(I'm not saying you are not already doing that, I'm talking statistically, according to my own experience helping others learn.)

The Debian Reference Card (English PDF; others and Debian documentation at https://www.debian.org/doc/user-manuals) can be useful.

You can find the latest versions of the C library interfaces (sections 2 and 3) and command-line commands (sections 1, 6, and 8) at the Linux man pages project, which is the upstream for the non-application/library-specific man pages.  On the command line, you can use man section command-or-function to display a specific man page, say man 1 bash .  You can omit the section number, in which case man will look it up in the preferred section order, but note that sometimes different sections do have very different pages; for example, man 2 signal shows the signal() C library function interface, but man 7 signal shows the overview of POSIX/Unix signals.
You can do a keyword search using man -k term or man -s section -k term to look up manual pages related to term.

One very useful command in Debian derivatives is dpkg-query -S $(which command), or equivalently dpkg-query -S /path/to/file.  It will tell you the name of the package that provided the command-line command or specified file; just remember to supply the full absolute path to the file in the latter form.  Then, you can use dpkg-query -L package | less to list all the other files in that package, or dpkg-query -s package to show the description of the package.
If you don't remember those options (I don't!), use man dpkg-query or dpkg-query --help to look them up!
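A minimal sketch of that lookup pattern, guarded so it is harmless on non-Debian systems (the `ls` example and the expected `coreutils` owner are just illustrations of the idea):

```shell
# Resolve a command name to an absolute path first; `which` searches $PATH:
cmd_path=$(which ls)
echo "ls lives at: $cmd_path"

# On a Debian-family system, map that path back to its owning package
# (typically prints something like "coreutils: /usr/bin/ls"); the guard
# skips the step where dpkg-query does not exist:
if command -v dpkg-query >/dev/null 2>&1; then
    dpkg-query -S "$cmd_path"
fi
```

The same two-step shape works for any command: resolve the path, then ask the package manager who owns it.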

(This is also the reason why my example code almost always prints usage information when run with -h or --help as the first command line parameter.  I have a directory tree full of examples, and instead of checking the source code to see if a particular one is the one I'm looking for, I just run the binary with the --help flag to see if it does what I'm looking for.  If it does, then I go looking at its sources.  Much faster this way.)



One thing I wish new Linuxers would consider and wield properly, is the privilege separation: users, groups (including supplementary groups), and filesystem capabilities.  If you ever find yourself doing sudo su - or chmod 0777 /dev/foo, you're doing things seriously wrong.

I have installed quite a few Apache servers in large organizations over the past two and a half decades, with multiple partially overlapping administrators (really, a proper admin hierarchy), without anyone needing to use sudo, with file user ownership indicating the creator, and access managed through group memberships (and a couple of helper scripts).  So I do claim I know this stuff.

On single-user workstations it does not matter much, except that it opens security holes that may or may not matter, but getting it right gives you a powerful new set of tools (that, again, on a single-user workstation may not be of much use).  However, even a rough understanding of the privilege separation mechanism gives you a starting point to solve problems instead of getting completely frustrated.  In short: every process has a specific user and a group, plus optionally a set of supplementary groups (that are essentially equivalent to the process group), and optionally a set of Linux capabilities, with the superuser (root) having all capabilities.  Files and devices are owned by a specific user and group; udev sets those (and the associated access mode) for devices; and executable binaries, but not scripts, can be set to grant special privileges (capabilities), much like the SetUID and SetGID bits can grant the process executing the binary superuser privileges, except much more fine-grained.
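A small sketch of poking at that machinery, read-only, under the assumption of a GNU/Linux system (the `dialout` group name and the usermod line, shown only as a comment, are the common Debian-family fix for serial-port access; check the device's actual group first):

```shell
# The identity every file access is checked against -- the process's
# user and its group list:
echo "user:   $(id -un)"
echo "groups: $(id -nG)"

# Inspect a file's owner, group, and mode without changing anything
# (GNU stat syntax):
stat -c 'owner=%U group=%G mode=%a' /etc/passwd

# The group-membership fix for a serial device, instead of chmod 0777.
# Needs root and the right group name (commonly `dialout` on Debian
# derivatives -- verify with `ls -l /dev/ttyUSB0`), so left commented:
#   sudo usermod -aG dialout "$USER"
```

If `id -nG` already lists the device's group, the permission problem lies elsewhere; if not, one group membership fixes it without opening the device to everyone.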

I wish there was a good guide I could point you to, but thus far, I haven't found one.
I have considered trying to write my own several times, but the best format I can think of is a Wikipedia-like interlinked snippet archive, and that is too large an undertaking for me alone; there are also very few people I'd trust to add correct advice...  and methinks providing misleading advice is much worse than being silent.
 
The following users thanked this post: Mechatrommer

Online tszaboo

  • Super Contributor
  • ***
  • Posts: 7387
  • Country: nl
  • Current job: ATEX product design
Re: DOS vs. Linux
« Reply #9 on: November 30, 2020, 11:29:42 am »
Well, on linux, if you press tab twice it lists out all the available commands.
And if you want more info on a command, you can type man <command>, which should tell you what it does.
And yes, the operating system is a confusing mess the first, second and hundredth time you try to use it. Especially since you have to interact not only with a bunch of built-in commands, but with distribution-dependent CLI applications and setup files. The documentation of those is sometimes terrible, and the usual way of solving any problem on a Linux computer is having a second computer where you google error messages.
So unless you really need to use Linux, why would you? There is PowerShell on Windows: all the commands you learned should work there, it is familiar, and it is extended to have much more capability than the old cmd.exe. Give it a try.
 

Offline MIS42N

  • Frequent Contributor
  • **
  • Posts: 511
  • Country: au
Re: DOS vs. Linux
« Reply #10 on: November 30, 2020, 11:52:09 am »
There is no DOS vs. Linux, it's unfair to try and compare the two. It's like comparing a bicycle and a truck. They both have wheels, go on roads, but there are too many differences for meaningful comparison.

I have been running a Linux server for over 12 years. It has a mail server, web server, time server, DLNA, software RAID, DHCP, Samba and some other things and runs 24/7. I really don't know how it works. Setting up each component took many hours of reading and sorting out errors, and I learned that this (for me) had to be written down step by step because I didn't learn the commands, didn't know what each configuration did, didn't have a good grasp of the security, etc. But if I had to rebuild it from scratch, I think I could, using what I have written. My point is Linux can be used without understanding it. Every now and then it is a total pain because something goes wrong but it has happened to someone else so a bit of research usually finds the answer.

It would be different if Linux was my desktop. Then it would be very desirable to learn commands so I didn't have to refer to my notes for every little thing. I have an older laptop running Puppy Linux, mainly because I wanted a serial terminal to log output from microprocessors. But again I know just enough to do what I want.

As I see it, there are two paths. Treat Linux as a tool and learn the bits you need and treat the rest as a black box, or become a mechanic and learn what is under the hood. I have at times done some in depth learning but without practice it doesn't stick. So I now take the tool approach.

I should mention that Linux is a generic term for a great variety of operating systems that derive from a common root but can be very different. For the uninitiated it is best to stick to one with a huge user base, because most of the answers are out there for that flavour of Linux. Some learning transfers from one Linux to another, but often there is a variation. I came across Puppy Linux because I wanted something simple. It logs in automatically as the root user, which is anathema for a serious user but ideal for a novice like me who keeps nothing sensitive on it and doesn't want to deal with unnecessary complications.

When it comes to Linux, it is definitely "horses for courses".
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #11 on: November 30, 2020, 04:44:25 pm »
Mostly, we use the 'bash' shell.  Buy a book on using bash: Google 'bash shell book' and you'll find many.
Sometimes we use the Bourne shell for scripting.  Buy a book on using sh.
You may even run across the C shell.  Buy a book on using csh.
Possibly even the Korn shell.  Buy a book on using ksh.

The problem is that these books are not going to hold your hand learning the really simple commands.  So I googled for 'raspbian list of shell commands' - I have one of the new Pi 400 computers on my desk.  I got:

https://www.raspberrypi.org/documentation/linux/usage/commands.md

This is a list of the most common commands.

There are hundreds of web sites that discuss shell commands.  You can absolutely forget learning all of them or even a significant percentage.  There are many commands and most have a bunch of options.  You really need a ready resource and I prefer books to web sites.  I'm probably a relic...

When you think you understand the options for 'tar', you don't.

The command line (shell) sits on top of the operating system and provides various utilities that allow the user to interact with the computer.  It's the same with CP/M, MSDOS and all the other operating systems.  But Unix/Linux takes command line functionality to a whole new level.

Too much typing?  You aren't using the 'alias' feature as much as you should.  You can type 'ls -la' or you can alias a new command 'la' to do the same thing.  Alias is one of my favorite features.
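A quick sketch of the alias idea, runnable as a script (the `count` shorthand is made up for the demo; the `shopt` line is only needed in scripts, since interactive shells expand aliases by default):

```shell
# Scripts need alias expansion switched on; interactive shells have it
# enabled already:
shopt -s expand_aliases

alias la='ls -la'      # the example from the post above
alias count='wc -l'    # a hypothetical shorthand of our own

# An alias is used exactly like a command -- here it counts three lines:
printf 'a\nb\nc\n' | count > /tmp/alias_demo.out
cat /tmp/alias_demo.out
```

Put the alias definitions in ~/.bashrc and they are available in every new interactive shell.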




 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: DOS vs. Linux
« Reply #12 on: November 30, 2020, 06:45:38 pm »
who want to install packages from command line nowadays? recovery tools are now all in Windows GUI. in M$ Windows, you google, you download and click setup.exe, done! tell me what a Linux command line can do Windows (the GUI) cant? i stuck with AutoCAD just because of this. i got used to type quicker than finding which button on the ribbon bar in newer app such as Inventor or Solid Work etc (no command line) but i guess this is a curse from 1995 era. probably the same reason why people still use Eagle instead of KiCAD/Diptrace/Altium. i suggest you get rid of it while you still can, find an OS that can do everything on a Windows GUI, if you get used to them, things should be much quicker.

I do. The command line is so much faster and more efficient, and I can administer every machine in the house just by SSHing into a terminal from my laptop. The GUI is easier for someone who is still learning their way around, but the command line is superior when you know what you're doing.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #13 on: November 30, 2020, 06:49:18 pm »
On Unix there are no "commands".

"Commands" is a concept used in OS like DOS where everything you type is an applet handled by things like "command.com", which also offers a built-in shell, but it's not even close to the UNIX shell.

On Linux, UNIX and BSD, there are no "commands", there are only programs and two main big concepts: each program has three streams, and streams can be piped.

The three streams are:
  • STDIN, namely input
  • STDOUT, namely output
  • STDERR, namely a second output channel, usually used for signaling errors
The output of a program can be used to feed the input of a second program, and/or redirected to/from a file.

Code: [Select]
ls *.php | grep app
(silly example, but here two "programs" (ls, grep) are piped: the output of "ls" feeds the input of "grep")
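A slightly longer pipeline in the same spirit -- each program does one small job and hands its output to the next (assumes the usual sort/uniq utilities; the color list is just sample data):

```shell
# Count how often each line occurs, then rank by frequency:
printf 'red\nblue\nred\nred\nblue\ngreen\n' |
  sort |        # group identical lines next to each other
  uniq -c |     # replace each group with "count line"
  sort -rn      # numeric sort, descending: most frequent first
```

The most frequent line ("3 red") comes out on top; no single program knows about the whole task, which is the core UNIX idea.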

So, what you type at the bash prompt belongs to three categories:
  • internal bash variables and keywords (e.g. "for", "while", "if" ...), used to create a script
  • the operating system's environment, used to operate with scripts and programs
  • the name/s of the program/s to be launched, plus options, pipe/s, and parameters

You can always type "man program_name" to see the full documentation!
« Last Edit: November 30, 2020, 10:14:37 pm by DiTBho »
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: DOS vs. Linux
« Reply #14 on: November 30, 2020, 06:51:42 pm »
Semantics. From the end user standpoint there is no difference between a "command" and a "program". It just needlessly complicates things to even add that distinction for a beginner. Whether typing "ls" is "commanding" the computer to do something, or executing a program that does what the user is wanting to do is completely irrelevant.
 
The following users thanked this post: newbrain, Masa

Offline Karel

  • Super Contributor
  • ***
  • Posts: 2217
  • Country: 00
Re: DOS vs. Linux
« Reply #15 on: November 30, 2020, 07:18:19 pm »
Linux on the other hand is a OS made by geeks for geeks.

Linux is made by engineers, for engineers. It's true that it can attract geeks, but most users of Linux are engineers.
 

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1209
  • Country: pl
Re: DOS vs. Linux
« Reply #16 on: November 30, 2020, 08:54:11 pm »
bostonman:
Where did the comparison between (MS-?)DOS and Linux(-based distributions) come from? Those operating system families have nothing to do with each other and are dissimilar on nearly every possible level. Perhaps this is where your confusion comes from?

Forget about (MS-?)DOS completely if you want to learn a Linux distro, some BSD or any other modern operating system. It’s like trying to learn to drive a train engine by comparing it to a crank-started Ford Model T from the 1910s. ;)

If you do not feel comfortable using the shell directly, you may try some distribution that is designed to be more friendly towards people not wishing to learn by being thrown into deep water with sharks around. This way you may gradually learn new things. Some will argue your progress will be slowed down, and that is true, but I find it better to learn things in 2 years than to give up after a day of frustration and not learn at all.

You may also learn by managing some web hosting through SSH. That protects you from making Catastrophic Mistakes™, while you are still getting accustomed to using the shell on a daily basis. While there is little use for a shell on Windows unless you are an admin or a programmer, you may use bash there too, and nowadays Windows even has its own PowerShell.(1)

As for software packages: see them as items in app stores, usually with an option to add other, custom stores (if you trust them). Instead of giving random people full, uncontrolled access to your computer, you ask a package manager to install stuff for you. How packages can affect your system is relatively well defined and mostly reversible from the very same tool.

A side note: avoid dd. Not only does it have zero benefit for most operations, but its behavior is misunderstood and leads to unexpected outcomes. It’s also too easy to misuse, which led to its alternative name: disk destroyer. cat and head are more than enough, possibly supplemented with tee.
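To make the cat/head suggestion concrete, here is the same idea exercised on an ordinary file under /tmp -- with a real drive you would substitute the device node (e.g. /dev/sdX, hypothetical here; always verify the name with lsblk first), since on Linux block devices read like any other file:

```shell
# Stand-in for a block device (real drives read the same way):
printf 'MBR....partition-table....data' > /tmp/fake_drive

# Full image -- plain redirection copies every byte, no dd flags needed:
cat /tmp/fake_drive > /tmp/drive.img

# Just the first 8 bytes (where dd would need bs= and count=):
head -c 8 /tmp/fake_drive > /tmp/drive_head.img

# Verify the copy is byte-identical to the source:
cmp -s /tmp/fake_drive /tmp/drive.img && echo "image matches source"
```

Writing an image back is the redirection reversed (cat image > device), which is exactly as dangerous as dd's of= and deserves the same triple-check of the device name.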

Why your dd command failed is hard to tell, because I have no details. The most likely cause is the same as always, on any system: you made a wrong choice. Depending on the environment it may be the wrong tool or options, or it may be an error made by the author of the program you use, in which case your mistake is trusting them ;). There is a correlation with the platform used, but do not overestimate it. I have seen a ton of destruction caused by people using tools improperly on Windows, as well as horrible programs written for Linux. There are even whole distributions that make more advanced users tear their hair out.

____
(1) Though idea behind it is a bit different from that of shells on Unix-ish systems.
People imagine AI as T1000. What we got so far is glorified T9.
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: DOS vs. Linux
« Reply #17 on: November 30, 2020, 08:58:45 pm »
DOS and Linux command line are not THAT different on the surface; it's more like comparing a moped to a customized racing motorcycle. All of the common commands in DOS have Unix equivalents that do roughly the same thing. Most of the typical stuff like copying/moving/renaming files, launching executables, creating text files, installing software, etc. can be done from either one using similar commands: copy = cp, move = mv, md = mkdir, rd = rmdir, cd = cd, etc.
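The correspondence can be shown side by side -- the script below runs the Unix commands in a scratch directory, with the rough DOS spelling in the comments (directory names are made up for the demo):

```shell
# The same file chores in both dialects; DOS equivalent in each comment:
mkdir -p /tmp/dosdemo        # md dosdemo
cd /tmp/dosdemo              # cd dosdemo
printf 'hello\n' > a.txt     # echo hello > a.txt
cp a.txt b.txt               # copy a.txt b.txt
mv b.txt c.txt               # move (or ren) b.txt c.txt
ls -l                        # dir
cat c.txt                    # type c.txt
rm a.txt c.txt               # del a.txt c.txt
cd /tmp && rmdir dosdemo     # rd dosdemo
```

The muscle memory transfers almost one-to-one; the differences show up later, in wildcards, pipes, and permissions.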
 

Offline grumpydoc

  • Super Contributor
  • ***
  • Posts: 2905
  • Country: gb
Re: DOS vs. Linux
« Reply #18 on: November 30, 2020, 09:00:49 pm »
I confess I am wondering why we are talking about two dinosaurs - DOS vs Linux shells - in 2021? Didn't someone invent the GUI already - I hear even Linux systems have them these days  >:D
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: DOS vs. Linux
« Reply #19 on: November 30, 2020, 09:21:21 pm »
I confess I am wondering why we are talking about two dinosaurs - DOS vs Linux shells - in 2021? Didn't someone invent the GUI already - I hear even Linux systems have them these days  >:D

Of course they do, but the command line is still king. Once you know your way around, it is MUCH faster to work in the command line. Many of my Linux systems are headless; there is no monitor or mouse connected to them, and they are accessed remotely via ssh. Yes, I could use VNC and have a graphical desktop, but why? What benefit does that bring other than being slower, more tedious to use, and requiring more effort to set up? When I want to move files around and do other stuff of that nature I use the terminal, even on my Mac. It's probably an order of magnitude faster than opening Finder, drilling down to the file I want, opening another Finder window, drilling down to the place I want it, dragging the file over, then renaming it to what I want it called, or whatever else I'm trying to do. Instead I can type out one single command line that does it all.
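That move-and-rename scenario really is one command: mv relocates and renames in a single step. A sketch with made-up directory and file names:

```shell
# Stand-in for the two-Finder-windows scenario:
mkdir -p /tmp/inbox /tmp/archive
printf 'quarterly numbers\n' > /tmp/inbox/draft.txt

# Move to the destination AND rename, in one command:
mv /tmp/inbox/draft.txt /tmp/archive/report-2020-q4.txt

# The file now exists only under its new name and location:
ls /tmp/archive
```

Everything the drag-then-rename dance does, with tab completion filling in most of the typing.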
 

Online Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11633
  • Country: my
  • reassessing directives...
Re: DOS vs. Linux
« Reply #20 on: November 30, 2020, 09:27:25 pm »
Linux on the other hand is a OS made by geeks for geeks.
Linux is made by engineers, for engineers. It's true that it can attract geeks, but most users of Linux are engineers.
what is your basis of saying that?

i checked pcb market... https://smtnet.com/news/index.cfm?fuseaction=view_news&news_id=9591&company_id=53231
i checked cad market... https://www.cadalyst.com/cad/pandemic-transforms-cad-industry-2020-and-beyond-76189

i don't see any KiCAD or Linux-specific/specialized EDA/CAD taking a big chunk in the stats.. all point to software on Windows. are you saying..

1) engineers using Wine/VMware on Linux just to run a few professional and expensive Windows applications?
2) engineers/scientists working for home, government or non-profit agencies, who never show up in those statistics?

or am i missing something? show me some facts please.
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): Its now indisputable that... organisms “expertise” contextualizes its genome, and its nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: DOS vs. Linux
« Reply #21 on: November 30, 2020, 09:30:21 pm »
It depends on your definition of "engineer". The FPGA toolchains at least for Altera and Xilinx are native *nix with Windows ports. Most of the big tech companies other than Microsoft use Linux internally, companies like Google employ thousands of software engineers who develop on Linux. "Geek" and "engineer" have a great deal of overlap.
 

Online Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11633
  • Country: my
  • reassessing directives...
Re: DOS vs. Linux
« Reply #22 on: November 30, 2020, 09:36:04 pm »
and how many software engineers vs electrical/mechanical/civil/aeronautics/marine/chemistry/etc/etc/etc engineers in numbers out there?

Of course they do, but the command line is still king.
did you read my AutoCAD story earlier? ;) we are both stuck, but i believe i'm not as serious about it as you are.
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): Its now indisputable that... organisms “expertise” contextualizes its genome, and its nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: DOS vs. Linux
« Reply #23 on: November 30, 2020, 09:39:58 pm »
and how many software engineers vs electrical/mechanical/civil/aeronautics/marine/chemistry/etc/etc/etc engineers in numbers out there?

No idea, but I'm sure the data is out there.

Personally I know more software engineers than all other engineering disciplines combined, but I live in close proximity to Microsoft, Amazon, Google, and countless smaller tech campuses.
 

Offline grumpydoc

  • Super Contributor
  • ***
  • Posts: 2905
  • Country: gb
Re: DOS vs. Linux
« Reply #24 on: November 30, 2020, 10:16:50 pm »
I confess I am wondering why we are talking about two dinosaurs - DOS vs Linux shells - in 2021? Didn't someone invent the GUI already - I hear even Linux systems have them these days  >:D

Of course they do, but the command line is still king. Once you know your way around it is MUCH faster to work in the command line. Many of my Linux systems are headless, there is no monitor or mouse connected to them and they are accessed remotely via ssh. Yes I could use VNC and have a graphical desktop but why? What benefit does that bring other than being slower, more tedious to use, and requiring more effort to set up? When I want to move files around and do other stuff of that nature I use the terminal, even on my Mac. It's probably an order of magnitude faster than opening finder, drilling down to the file I want, opening another finder window, drilling down to the place I want it, dragging the file over, then renaming it to what I want it called, or whatever else I'm trying to do. Instead I can type out one single command line that does it all.

Oh, don't get me wrong - I quite agree and, having been using Unix systems since the 1980s, spend a lot of my time typing at bash, although I often prefer to use VNC to remote systems for one reason or another; not infrequently it is useful to be able to run a browser on them. E.g. the headless CentOS box (a PC Engines APU) that I use as a router has the only physical Ethernet connection to the modem (for PPPoE); the modem in turn offers no command line, being one of the very few bits of network kit here not running OpenWRT, but it does have a web interface - so the only way I can access that is to have VNC on the router.

But the new-user introduction to Linux, today, shouldn't really need the command line at all; unless bostonman still boots his 10th gen i9 into DOS, or just uses Windows as a way to have more than one DOS command shell open at once, of course :)
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #25 on: November 30, 2020, 11:09:55 pm »
A side note: avoid dd. Not only does it have zero benefits for most operations, but its behavior is misunderstood and leads to unexpected outcomes. It’s also too easy to misuse, which led to its alternative name: disk destroyer. cat and head are more than enough, possibly supplemented with tee.


The IBM 1130, for which I have an FPGA version, uses 321 16-bit words per sector times some number of sectors per track times some number of tracks.  I want to map that to a Compact Flash with 512 bytes per sector.

I have a complete disk image in 1130 format built by a Makefile running at the command line.  Now I want to reconfigure the image and write it to the Compact Flash using two 512-byte physical sectors (thus 512 16-bit words) per logical sector and zero-padding the 2nd physical sector.

'dd' is the entire reason I bothered to learn about Linux way back a long time ago.  It just turns out to be one of the more important utilities available - to me!  Yes, things can go wrong in an instant.  It doesn't check your choice of /dev/sd*, it just does what it is told.

But when you can run a command line cross-assembler, use 'make' and a talented Makefile along with a bit of C code to reconfigure the sectors and the 'dd' utility to write the Compact Flash, nothing else comes close.  One command builds and transfers EVERYTHING.  By all means, learn about Makefiles!

Let's face it, Linux is on less than 2% of desktops (it dominates the server market).  After 20 years and a compelling price, they can't even give the thing away.  It is simply too command-line oriented.  Too many things can NOT be set up in a GUI and you wind up spending a lot of time typing commands.  In fact, only fairly high-level applications run at the GUI level.  Digilent Waveforms or the Arduino IDE are examples, as is Eclipse.  You cannot compile and run a C program from the GUI unless you have an IDE of some type.  Awkward if all you want is the output.  Maybe write the code with 'geany' and compile it with the 'cc' compiler and run it.  Here come the numbers!

The command line is king.  If you don't want to use it, just run Windows.  Even with PowerShell, Windows doesn't come close to mirroring *nix.  And the Bash shell on Win 10?  Try to print a file or exchange a file with the Windows filesystem.  It's a mess!  I have it on all my Win 10 boxes and never use it.

It takes a long time to learn the Linux commands.  Just do an 'ls' on /bin, /sbin, /usr/bin, /usr/sbin and see how many utilities there are.  Hundreds!  And there'll be a test later!
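A quick way to get a feel for that scale (a sketch only; directory layout and counts vary by distribution, and on merged-/usr systems /bin is just a symlink to /usr/bin):

```shell
# Count the executables in the standard binary directories.
for d in /bin /sbin /usr/bin /usr/sbin; do
    [ -d "$d" ] && printf '%-10s %5s\n' "$d" "$(ls "$d" | wc -l)"
done
```

On a typical desktop install the totals easily run into the thousands, and `man <command>` is the way to look any one of them up.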

« Last Edit: November 30, 2020, 11:15:24 pm by rstofer »
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: DOS vs. Linux
« Reply #26 on: November 30, 2020, 11:57:34 pm »
Unless you're hoping to get a job as a Linux/Unix administrator you don't need to learn all of the commands. 99% of normal use involves maybe 5-10 commands that get used frequently, then there is a lot of other stuff that is used internally by the system, or that get used for various edge cases. If you don't know how to accomplish a specific task that's when you can look for the command to do it.
 

Offline bostonmanTopic starter

  • Super Contributor
  • ***
  • Posts: 1790
  • Country: us
Re: DOS vs. Linux
« Reply #27 on: December 01, 2020, 01:35:54 am »
As predicted, I got lots of (and more than expected) input.

Just to clarify on a few points. I'm not "comparing" Linux to DOS. My question was based on them both being similar since they run command lines and have options (?) such as a forward slash and a letter after a command.

The need to learn and/or use Linux is based off of periodically using Ubuntu and recently a Raspberry Pi. With the Raspberry Pi (as someone pointed out), I needed to read a list of steps to set it up, however, I understood very little. The need for using DD came from a failing hard drive where a GUI wasn't working, so most recommended DD. To be honest, I thought it was great advice because it worked well, but I didn't understand anything I was doing, just read steps.

My question about Linux somewhat arose from using DD. While DD was highly recommended, I got suggestions about DD variations, along with using different commands.

In DOS, if you want to 'copy' something, you use 'copy', but it seems in Linux a plethora of commands exist to do the same thing. My confusion comes from how do I know which commands exist, how to determine the best one, and just plain confusion over (to use as an example) the apt-get blah blah

When I set up the Raspberry Pi, I attempted to surf the Internet and couldn't play video. After researching, I ended up typing a list of apt-get and various other commands that spewed out what seemed like 1000s of lines of text updates. In some cases I'd see something that read failed (or a variation of that word - I forgot now), but then it would continue, and after 1000s of lines had executed I'd need to guess whether the apt installed correctly.

Does anyone have a suggestion on a book? As someone pointed out, they, and myself, prefer a book over searching the net. A nice book with a list of commands and stuff would be nice, but I'm sure this is a loaded question.
 

Offline DimitriP

  • Super Contributor
  • ***
  • Posts: 1307
  • Country: us
  • "Best practices" are best not practiced.© Dimitri
Re: DOS vs. Linux
« Reply #28 on: December 01, 2020, 02:30:29 am »
Quote
While DD was highly recommended, I got suggestions about DD variations, along with using different commands.
This is another case of "I was told to do X"
So here I am, How can I do X?
"people" are always ready to throw out  "advice".

You have a splinter?
Use a small sharp knife to cut the skin open . <inserts amazon link to sharp knife>

In most cases, especially with rPi, it's used as a stepping stone to accomplish something else.
Chefs usually cook in their kitchen; they don't install the electrical panel for the oven after operating a forklift to move it from the delivery truck into the building.

So... what I'm trying to say is: if you wanna learn enough Linux to get stuff done, do.
But if you need to "use it" for something else, don't. Get someone else to configure it for you.
The problem usually is not basic commands, but understanding the concept behind it.
DOS doesn't have a "dd" command. You are not "copying a file" à la copy myfile.txt myfile2.txt
Without understanding why dd is being used, it all goes south.

And no matter which "variation" was suggested, every one of the options is documented.
"man dd" will show you every option available, but I have no idea if man pages get installed on rPi distributions

...and just to be clear... when having a question it's better to come out and ask it, "after some googling", so no one like me comes back with "is your google broken?", instead of starting with a vague comparison of DOS vs Linux where the only thing they have "in common" is the "prompt where you type stuff in", which too many people refer to as ........DOS....
   If three 100  Ohm resistors are connected in parallel, and in series with a 200 Ohm resistor, how many resistors do you have? 
 

Offline bostonmanTopic starter

  • Super Contributor
  • ***
  • Posts: 1790
  • Country: us
Re: DOS vs. Linux
« Reply #29 on: December 01, 2020, 03:30:51 am »
Quote
...and just to be clear...when having a question it's better to come out and ask it, "after some googling"

I believe my question was clearly asked, however, I asked a question regarding confusion, so maybe it wasn't clear enough for you (and possibly others).

My question was somewhat simple: DOS vs. Linux and basically why is DOS easy to navigate while Linux seems to have an endless (and confusing) number of command options.

My last sentence in my previous post did inquire about whether anyone knows a good reference book I can buy to have as a reference to avoid having to ask (or Google).
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #30 on: December 01, 2020, 04:23:52 am »
Unless you're hoping to get a job as a Linux/Unix administrator you don't need to learn all of the commands. 99% of normal use involves maybe 5-10 commands that get used frequently, then there is a lot of other stuff that is used internally by the system, or that get used for various edge cases. If you don't know how to accomplish a specific task that's when you can look for the command to do it.

Since I use Linux for 99% of everything, I just used it to check my most commonly used commands:

Code: [Select]
$ awk '{print $1}' ~/.bash_history | sort | uniq -c | sort -n -k1 -r

As I have duplicate removal turned on, this counts only the distinct variations of each command, rather than the number of times each was actually run.

The (slightly redacted) results:

Code: [Select]
    659 cd
    600 git
    564 l
    489 sudo
    419 riscv64-unknown-elf-gcc
    388 ssh
    386 gcc
    344 man
    332 scp
    280 riscv64-unknown-elf-objdump
    246 cat
    242 rm
    219 less
    210 time
    192 arm-linux-gnueabihf-gcc
    159 clang
    144 find
    141 make
    136 mv
    120 echo
    113 mkdir
    104 emacs
     92 spike
     86 cal
     82 du
     81 objdump
     73 file
     72 ls
     70 cp
     69 top
     69 for
     65 arm-linux-gnueabihf-objdump
     63 pushd
     62 grep
     50 perl
     48 qemu-riscv64
     48 df
     41 which
     41 tar
     39 ifconfig
     38 riscv64-unknown-elf-ld
     37 riscv64-unknown-elf-as
     36 riscv64-unknown-elf-objcopy
     34 avr-gcc
     32 riscv32-unknown-elf-gcc
     31 qemu-riscv32
     30 llc
     30 ./configure
     28 export
     26 uptime
     26 cmake
     23 kill
     22 aarch64-linux-gnu-gcc
     21 seq
     21 dos2unix
     20 size
     20 /home/bruce/software/arduino-1.8.10/hardware/tools/avr/bin/avrdude
     19 ping
     18 cut
     18 avr-objdump
     16 tail
     16 python
     16 opt
     15 perf
     15 minicom
     15 lsblk
     14 uname
     14 popd
     14 aarch64-linux-gnu-objdump
     13 while
     12 zcat
     12 rmdir
     12 ln
     11 sort
     11 ../configure
     10 unzip
     10 mtr
     10 killall
     10 gunzip
     10 apt
      9 wc
      9 nslookup
      9 lsb_release
      9 chmod
      8 diff
      7 wget
      7 set
      7 pgrep
      7 awk
      7 apt-get
      6 sync
      6 asciidoc
      5 stty
      5 strings
      5 riscv64-unknown-linux-gnu-gcc
      5 curl
      5 crontab
      4 vi
      4 pwd
      4 lsusb
      4 date
      4 cmp
      3 unix2dos
      3 touch
      3 shasum
      3 screen
      2 su
      2 strip
 
The following users thanked this post: Ed.Kloonk

Offline DimitriP

  • Super Contributor
  • ***
  • Posts: 1307
  • Country: us
  • "Best practices" are best not practiced.© Dimitri
Re: DOS vs. Linux
« Reply #31 on: December 01, 2020, 04:45:54 am »
Quote
...and just to be clear...when having a question it's better to come out and ask it, "after some googling"

I believe my question was clearly asked, however, I asked a question regarding confusion, so maybe it wasn't clear enough for you (and possibly others).

My question was somewhat simple: DOS vs. Linux and basically why is DOS easy to navigate while Linux seems to have an endless (and confusing) number of command options.

My last sentence in my previous post did inquire about whether anyone knows a good reference book I can buy to have as a reference to avoid having to ask (or Google).


Take a look here :
http://www.yolinux.com/TUTORIALS/unix_for_dos_users.html

and for more specific in-depth info:
https://man7.org/linux/man-pages/dir_section_1.html



   If three 100  Ohm resistors are connected in parallel, and in series with a 200 Ohm resistor, how many resistors do you have? 
 

Offline james_s

  • Super Contributor
  • ***
  • Posts: 21611
  • Country: us
Re: DOS vs. Linux
« Reply #32 on: December 01, 2020, 06:01:26 am »
In DOS, if you want to 'copy' something, you use 'copy', but it seems in Linux a plethora of commands exist to do the same thing. My confusion comes from how do I know which commands exist, how to determine the best one, and just plain confusion over (to use as an example) the apt-get blah blah

Such as?

To copy a file you use 'cp', it works almost identically to the DOS 'copy' command.


apt-get is the package manager interface, it's used to install packages from a repository. It's a capability that DOS doesn't have in the first place so there's no direct equivalent.
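A minimal side-by-side sketch of the copy case (file names here are just examples, run in a scratch directory):

```shell
# Work in a throwaway directory so nothing real is touched.
cd "$(mktemp -d)"
echo "some data" > myfile.txt

# DOS:    copy myfile.txt backup.txt
# Linux:  same argument order - source first, destination second.
cp myfile.txt backup.txt

# Recursive directory copy, roughly what DOS's xcopy /s did:
mkdir mydir && cp myfile.txt mydir/
cp -r mydir mydir-backup
```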
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #33 on: December 01, 2020, 07:08:46 am »
man pages are installed on Raspbian, the same as on Debian, from which it is derived.
« Last Edit: December 01, 2020, 07:29:54 am by rstofer »
 

Offline Karel

  • Super Contributor
  • ***
  • Posts: 2217
  • Country: 00
Re: DOS vs. Linux
« Reply #34 on: December 01, 2020, 07:37:59 am »
what is your basis of saying that?

i dont see any KiCAD nor Linux specific/specialized EDA/CAD taking a big chunk in the stats.. all point to softwares in Windows. are you saying..

1) engineers using Wine/VMware in Linux just to run few professional and expensive Windows softwares?
2) engineers/scientists that work from home, government or non profitable agencies? that never see the light in those statistics?

or am i missing something? show me some facts please.

The following industry grade EDA software runs natively on Linux: Cadence, Zuken, ADS (Keysight).

For FPGA's: Altera and Xilinx

For MCU's, almost all of them e.g. STMCube/IDE,  Microchip

I don't believe they would keep offering Linux-based software for many years in a row if there weren't demand from the market.

Mech CAD is another story though...
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #35 on: December 01, 2020, 07:47:51 am »
I might just pick a Command Line book at random like:

https://www.amazon.com/Linux-Command-Line-2nd-Introduction/dp/1593279523

The reason that Unix/Linux uses cp instead of copy is simple:  Programmers are lazy and they don't want to type any more than necessary.

Or maybe this excuse:

https://catonmat.net/why-unix-commands-are-short

Well, way back then, the teletype was king and it actually was difficult to type.  There was a time before 'glass teletypes'.  The CRTs didn't really show up much before the early '70s.  I think I got my ADM3 around '75.  I got the Televideo TV 950 a couple of years later.

Yes, graphics terminals were around much earlier but they weren't usually the main console and they were expensive.

cd is a utility and is pretty much the same on all *nix systems.
apt-get is an application and works only with some Linux distributions.  It may not be covered in every book on Linux commands because it isn't really a command in the sense of cp or rm.

When you want to wander through the file system and save some typing, look up the push and pop commands.  If I do something like $ push /etc, I actually change to the /etc directory just as though I used $ cd /etc.  But to get back to where I came from, all I have to do is $ pop

 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #36 on: December 01, 2020, 07:50:56 am »
When you want to wander through the file system and save some typing, look up the push and pop commands.  If I do something like $ push /etc, I actually change to the /etc directory just as though I used $ cd /etc.  But to get back to where I came from, all I have to do is $ pop

Which shell has push/pop rather than the usual pushd/popd?
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #37 on: December 01, 2020, 10:02:49 am »
My question was somewhat simple: DOS vs. Linux and basically why is DOS easy to navigate while Linux seems to have an endless (and confusing) number of command options.
This is because DOS is simple, and Linux distributions complex.

One big difference is due to privilege separation.  DOS has none, Linux has full user-group model and independent capabilities models.
Another is device support.  DOS supports parallel and serial port, floppy drives, hard drives, and a NUL: device (IIRC), but Linux supports a lot of different devices, down to GPIO pins if your computer or device has those.
Third is the kernel configurability and interfaces, i.e. the /proc and /sys pseudo-filesystems providing direct access to the kernel internals.

For example, in DOS, you cannot know what TSR (terminate-stay-resident) programs are currently running.  In Linux, we have the /proc/PID/ pseudo-directories exposing every one, including kernel threads doing userspace tasks.
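For example, walking those pseudo-directories yourself takes only a few lines (Linux-only; a rough sketch of the raw data tools like 'ps' are built on):

```shell
# Every running process appears as /proc/<pid>/; the 'comm' file
# holds its command name.  2>/dev/null hides processes that happen
# to exit while we are listing.
for pid in /proc/[0-9]*; do
    printf '%6s  %s\n' "${pid#/proc/}" "$(cat "$pid/comm" 2>/dev/null)"
done | head -n 10
```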

The comparison is therefore not very fair, because Linux-kerneled operating systems are so much more than DOS ever was.  But that is the core reason for the huge difference in complexity as well.  We'd have to strip out most of what Linux is to get down to the simplicity DOS had.  So, why don't we?  Because those "added" features are what makes Linux useful.  It is often not easy to see that, because those features are used internally by the utilities we do use.

Story time:

When I developed a self-replicating USB stick that was used to measure high-performance computing nodes – run and time specific gromacs, vasp, and dalton simulations compiled as static x86-64 binaries with generic optimization –, I started with a very DOS-like environment.  Basically, all I had were GNU coreutils: cat chgrp chmod chown cp date dd df dir echo false ln ls mkdir mknod mktemp mv pwd readlink rm rmdir sleep stty sync touch true uname vdir [ arch b2sum base32 base64 basename chcon cksum comm csplit cut dircolors dirname du env expand expr factor fmt fold groups head hostid id install join link logname md5sum mkfifo nice nl nohup nproc numfmt od paste pathchk pinky pr printenv printf ptx realpath runcon seq sha1sum sha224sum sha256sum sha384sum sha512sum shred shuf sort split stat stdbuf sum tac tail tee test timeout tr truncate tsort tty unexpand uniq unlink users wc who whoami yes md5sum.textutils.
It had a custom kernel with an initial ramdisk (initrd) with all relevant drivers (mostly raid stuff, really) compiled as modules, but not even an init system.
(This was just before systemd.  I'm not even sure you could do this with Debian anymore, or any other systemd-based system.)
I used the venerable SystemV init binary to provide a tty interface (loginless/passwordless bash shell) to be used to examine the USB stick, and to replicate the stick itself via a command if wanted, with /etc/rc (if I recall correctly) running the actual measurement system, automatically shutting down (after some 8 hours of calculations) when done.

If that sort of thing interests you, I recommend taking a look at Linux From Scratch.  It is basically an online book by Gerard Beekmans, Matthew Burgess, Bruce Dubbs, and others, on how to build a Linux system directly from sources.  It is very informative and teaches one a lot about how the userspace part of the operating system (as opposed to the kernel) is put together.  The environment you have at the end of the LFS book is very DOS-like.

Which shell has push/pop rather than the usual pushd/popd?
Anyone who has
    alias push=pushd
    alias pop=popd
in their profile?  After all, you yourself have something like
    alias l='ls -laF --color=auto'
don't you? 8)
Me, I even have alias dir='ls -laF --color=auto' myself.  Some habits die hard, let's not be too judgy!
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #38 on: December 01, 2020, 10:40:43 am »
Which shell has push/pop rather than the usual pushd/popd?
Anyone who has
    alias push=pushd
    alias pop=popd
in their profile?  After all, you yourself have something like
    alias l='ls -laF --color=auto'
don't you? 8)
Me, I even have alias dir='ls -laF --color=auto' myself.  Some habits die hard, let's not be too judgy!

Just alias l="ls -l". Personal aliases are one thing, telling a newbie struggling with Linux to "look up" personal aliases is a little different :-)

Habits .. yes, I had to unlearn "dir" and use "ls", but I think forgetting "pip" and using "cp" took my fingers longer.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #39 on: December 01, 2020, 10:51:55 am »
Personal aliases are one thing, telling a newbie struggling with Linux to "look up" personal aliases is a little different :-)
Very true, but because of muscle memory, it is too easy to forget the Bash built-ins are pushd and popd instead of just push and pop.  (I'd like to give rstofer a little slack here, because I too make that sort of error regularly.)

It is good to point out the correct commands, though. :-+
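For reference, the correct built-ins look like this (bash; pushd/popd maintain a directory stack, so they nest):

```shell
start="$PWD"
pushd /etc > /dev/null   # change to /etc, remembering where we were
pwd                      # now /etc
popd > /dev/null         # pop the stack: back to the starting directory
pwd
```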

Habits .. yes, I had to unlearn "dir" and use "ls", but I think forgetting "pip" and using "cp" took my fingers longer.
pip was from CP/M, wasn't it?  Oops, no, originally from DEC PDP-6!

Also, the file specification strings part of the Wikipedia article on PIP shines interesting light on DOS device naming as well.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #40 on: December 01, 2020, 11:03:09 am »
One big difference is due to privilege separation.  DOS has none, Linux has full user-group model and independent capabilities models.

BeOS v4 and v5 are (somehow) UNIX-like OSes, but they never had any user-group model, multi-user support, or privileges. You have a bash shell, gcc, and a subset of the UNIX programs (dd was banned and not included with the install for some reason; I had to compile it myself), and you are always logged in as "root".

Insanely great, ain't it?

I loved it for two years; my first computer was a home computer only able to run BASIC (1989), then I switched to a DOS computer (1993), improved into "DOS with Microsoft LanMan" (1997; it added TCP/IP, ftp, telnet, ping, ...), then to BeOS (2001), and finally to Linux (2003).

So literally in my learning experience these "user-group model, privileges" things have improved linearly time by time :D :D :D
« Last Edit: December 01, 2020, 11:13:01 am by DiTBho »
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #41 on: December 01, 2020, 11:06:02 am »
Another is device support.  DOS supports parallel and serial port, floppy drives, hard drives, and a NUL: device (IIRC), but Linux supports a lot of different devices, down to GPIO pins if your computer or device has those.

A thing I loved about BeOS which I miss in Linux: the network-class of devices (ethernet, FC, AFDX, etc) doesn't have a devname.

You cannot assign an IP-address with "echo 192.168.1.12 > /dev/eth0/ip"; it doesn't work this way, and you need to use a program like "ifconfig eth0 192.168.1.12". And if you open a socket, there is no "/dev/eth0/sock-xxxx" exported to the devfs (/dev/), and you need programs like NetCat to recreate this functionality (it's useful to clone a partition through the network, or just to chat), while on BeOS you had all of these exported by the kernel.

I loved it, really!  :D

Ok, with Python it's not a problem nowadays. You open an editor, you write a few lines, and you get a socket-ID from which you can receive some bytes, and write some bytes. No problem, just a bit of nostalgia.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #42 on: December 01, 2020, 11:24:36 am »
One big difference is due to privilege separation.  DOS has none, Linux has full user-group model and independent capabilities models.

BeOS v4 and v5 are (somehow) UNIX-like OSes, but they never had any user-group model, multi-user support, or privileges. You have a bash shell, gcc, and a subset of the UNIX programs (dd was banned and not included in the install for some reason; I had to compile it myself), and you are always logged in as "root".

Insanely great, ain't it?
I am so paranoid (or used to not trusting any user, including myself), that that gives me the heebie-jeebies! :scared:

A thing I loved about BeOS which I miss in Linux: the network-class of devices (ethernet, FC, AFDX, etc) doesn't have a devname.
Well, you could always write a character device driver that does that, just like say GPIO drivers do.  (They also expose new device nodes dynamically.)

It wouldn't even be that complicated, since the existing iptables (netfilter, actually) talks to the kernel via a netlink socket.  Your custom BeOS-like driver would just provide a parallel access to it.

It might be an interesting exercise, if one is familiar with the BeOS interface (since one has an idea how it should work – that's the part that takes the most work!), and wants to learn how to write Linux driver modules.  The Linux Device Drivers books (especially 3rd edition freely available on the net) are very useful, but somewhat dated; one must check most things against the Linux kernel developer documentation to make sure one is using the most up-to-date interfaces.

As to TCP and UDP sockets, Bash does provide /dev/tcp/host/port and /dev/udp/host/port functionality (by interposing such paths with its own support machinery).  You can do things like
Code: [Select]
#!/bin/bash
exec 3<>/dev/tcp/duckduckgo.com/80
printf 'GET / HTTP/1.0\r\nHost: duckduckgo.com\r\nConnection: close\r\n\r\n' >&3
cat <&3
to do your own HTTP client.  This is ages old, and was/is used in e.g. aforementioned LinuxFromScratch to obtain the IANA Service registry (human-readable version; the file that tells which TCP or UDP port number each service name uses), without an existing HTTP client, early enough in the build process.
 
The following users thanked this post: DiTBho

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #43 on: December 01, 2020, 12:01:39 pm »
BeOS is dead nowadays; it for sure won't run on a modern PC, there are no drivers, and Be Inc., the company led by Jean-Louis Gassée, no longer exists.

The company was sold to Palm years ago, and the idea of BeOS was "replaced" by Haiku, as you can read in its presentation: "Haiku is an open-source operating system that specifically targets personal computing. Inspired by the BeOS, Haiku is fast, simple to use, easy to learn and yet very powerful".

It's multi-user, and it has user-group privileges. All great add-ons :D

There is a beta-CD installer freely downloadable, it contains everything; Haiku is UNIX-like and it's much simpler than Linux (even if more limited, and with less applications), maybe a good dialectical experience for those who come from DOS/Windows.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #44 on: December 01, 2020, 12:48:19 pm »
Haiku is UNIX-like and it's much simpler than Linux (even if more limited, and with less applications), maybe a good dialectical experience for those who come from DOS/Windows.
Yup.  My own observations (across maybe one or two hundred people, in a university environment, spread over a couple of decades; so not random per se) indicate that experience with getting different operating systems (and programming languages!) to do what one wants – as opposed to copying existing recipes or learning actions by rote without understanding – somehow makes it easier and more efficient to utilise the existing ones and learn new ones.

I think it is part of "opening up paradigms", or being able to see "outside the box"; something akin to stopping and looking around to perceive the things around you, instead of focusing on the path you are on.  (I do not really know how to express this precisely.)
Part of it is that when you have fewer expectations on how things should work, you are more open to learning how they actually do work in a particular environment.  This applies to all areas of human learning, though, and I see it in science (physics in particular) all the time.  The hot-water-freezes-faster-than-cold-water experiment is a particularly apt example. (Simply put, the hydrogen-oxygen bond length affects the heat capacity; it's an extra degree of freedom that rapid convective cooling affects slower than it affects the overall temperature.  Most physicists still have difficulties accepting it, even though it is a simple experiment easily verified, and even duplicated in simulations.)

This is also why I always recommend learning through understanding, instead of copying recipes as-is.  Sure, you need to initially copy recipes to get stuff done – and getting stuff done is the point – but finding out why and how (in the engineering sense as opposed to science or philosophy) will eventually yield better tools and enables better control and more efficient workflows.  Instead of copying or repeating things, you can create new workflows.
 
The following users thanked this post: paf, DiTBho

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: DOS vs. Linux
« Reply #45 on: December 01, 2020, 01:26:10 pm »

  :popcorn:

I could not avoid some laughs reading this one...

And felt insanely compelled to start a thread
about CP/M vs. XENIX vs. SCO...

Like Wile E. Coyote, they always hit the ground...

Paul
 

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1209
  • Country: pl
Re: DOS vs. Linux
« Reply #46 on: December 01, 2020, 03:50:37 pm »
The IBM 1130, for which I have an FPGA version, uses 321 16-bit words per sector times some number of sectors per track times some number of tracks.  I want to map that to a Compact Flash with 512 bytes per sector. (…)
I just hope you know that dd may randomly fill some sectors with garbage and shift data into others when a read returns short. Unless you use the GNU Coreutils version of dd, which offers the non-standard iflag=fullblock input flag to control that — but then you must remember to always specify it. However, I said “most”. Yes, there are some situations in which dd may be useful. For this particular one, I would still write a separate program.

'dd' is the entire reason I bothered to learn about Linux way back a long time ago.
But… you can use dd on Windows too. Cygwin or MSYS, and I think one should be available through WSL. :D
Or “a long time ago” was more than 15 years ago?

The command line is king.  If you don't want to use it, just run Windows.  Even with PowerShell, Windows doesn't come close to mirroring *nix.  And the Bash shell on Win 10?  Try to print a file or exchange a file with the Windows filesystem.  It's a mess!  I have it on all my Win 10 boxes and never use it.
The reason I switched to Linux was because my Windows was consisting mostly of utilities typically found in Linux-based systems. They were just not fitting well into a non-POSIX environment. So I decided to embrace my inner penguin.

As predicted, I got lots of (and more than expected) input.
Of course! This is the internet! Everyone must have an opinion! ;)

Just to clarify a few points. I'm not "comparing" Linux to DOS. My question was based on them both being similar, since they both run command lines and have options (?) such as a forward slash and a letter after a command.
Having a command-line shell is not a feature specific to (MS-?)DOS or Linux-based systems. Nearly any sane operating system offers one. Some larger applications do too. Windows is no exception: not only does it offer two shells by default, but they are better and more widely used than those in MS-DOS.
« Last Edit: December 01, 2020, 03:52:44 pm by golden_labels »
People imagine AI as T1000. What we got so far is glorified T9.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #47 on: December 01, 2020, 04:34:22 pm »
The IBM 1130, for which I have an FPGA version, uses 321 16-bit words per sector times some number of sectors per track times some number of tracks.  I want to map that to a Compact Flash with 512 bytes per sector. (…)
I just hope you know that dd may randomly fill some sectors with garbage and shift data into others when a read returns short. Unless you use the GNU Coreutils version of dd, which offers the non-standard iflag=fullblock input flag to control that — but then you must remember to always specify it. However, I said “most”. Yes, there are some situations in which dd may be useful. For this particular one, I would still write a separate program.
I didn't mean to imply that I let 'dd' fill the remaining bytes in the 2d sector.  I have a little piece of C code that takes the image file and expands it to another padded file and I use that file as input to 'dd'.
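For anyone curious, that expansion step can also be sketched in plain shell, assuming each 642-byte (321-word) record is zero-padded to 1024 bytes, i.e. two 512-byte CF sectors; the file names are invented for the example, and a real script would want error checking:

```shell
# Demo input: two 642-byte records (all 'A's, then all 'B's).
head -c 642 /dev/zero | tr '\0' 'A'  > image.bin
head -c 642 /dev/zero | tr '\0' 'B' >> image.bin

: > padded.bin
rec=0
# Read one 642-byte record at a time; past EOF, dd writes an empty
# rec.tmp and the loop ends.
while dd if=image.bin of=rec.tmp bs=642 skip=$rec count=1 status=none \
      && [ -s rec.tmp ]
do
    # conv=sync zero-pads the short 642-byte input block up to the
    # 1024-byte block size; notrunc keeps earlier records intact.
    dd if=rec.tmp of=padded.bin bs=1024 seek=$rec count=1 \
       conv=sync,notrunc status=none
    rec=$((rec + 1))
done
rm -f rec.tmp

wc -c padded.bin    # two 1024-byte padded records, so 2048 bytes
```

Looping dd once per record is slow for big images, which is exactly why a small C program is the nicer tool here.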
Quote

'dd' is the entire reason I bothered to learn about Linux way back a long time ago.
But… you can use dd on Windows too. Cygwin or MSYS, and I think one should be available through WSL. :D
Or “a long time ago” was more than 15 years ago?
Actually, it was!  I built the project right after retiring in 2003.  For those keeping score, that's 17 years of not showing up at the office.




 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #48 on: December 01, 2020, 05:31:25 pm »
Very true, but because of muscle memory, it is too easy to forget the Bash built-ins are pushd and popd instead of just push and pop.  (I'd like to give rstofer a little slack here, because I too make that sort of error regularly.)

It is good to point out the correct commands, though. :-+

Yes!  I must have been tired!  The good news is that push and pop will both generate an error and then I can extract my head from my *** and type pushd or popd.

Code: [Select]
$ push /etc
bash: push: command not found
$
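For reference, the working pair looks like this (pushd and popd are Bash builtins; pushd normally prints the directory stack, silenced here):

```shell
cd /tmp
pushd /etc > /dev/null    # saves /tmp on the stack, cd's to /etc
pwd                       # /etc
popd > /dev/null          # pops the stack, back where we started
pwd                       # /tmp
```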

That's also the problem with discussing Linux while working on a Win 10 machine!  Or worse, an iPad!
Quote


Habits .. yes, I had to unlearn "dir" and use "ls", but I think forgetting "pip" and using "cp" took my fingers longer.
pip was from CP/M, wasn't it?  Oops, no, originally from DEC PDP-6!

Also, the file specification strings part of the Wikipedia article on PIP shines interesting light on DOS device naming as well.

I have always had a Linux alias for dir.  I didn't have much problem forgetting pip but it turns out I was using it a couple of weeks ago on a Z80 CP/M project.

https://rc2014.co.uk/

Completely off topic:  I have a couple of the new Raspberry Pi 400 computers.  They are really slick in that the computer is built inside the keyboard.  They left out the audio connector!  Bummer!  I think I'm supposed to add Bluetooth Audio.

https://youtu.be/ZSvHJ97d8n8

I gave up on the RPi mouse, it doesn't have the forward/back buttons.  A nice Bluetooth mouse is a lot more usable.

The keyboard unit also has a USB hub - 2 USB 3.0 and 1 USB 2.0.

It makes a very tidy, and silent, Linux desktop and the speed is adequate for most things.  It's a nice place to do Linux kinds of things.  REMEMBER: It's an ARM processor so you either have to download RPi packages or build from source.  There's a reason I remember this!
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #49 on: December 02, 2020, 01:14:04 am »
I recommend taking a look at Linux From Scratch.  It is basically an online book by Gerard Beekmans, Matthew Burgess, Bruce Dubbs, and others, for how to build a Linux system directly from sources.

You need to be a Linux user in order to benefit from this book, so it seems a bit cart before horse. If that's the route to take, I would recommend cross-compiling Linux on the OS the user is comfortable with (and since we're talking DOS, presumably that would be Windows). If nothing else, it would introduce the user to where those CLI 'commands' come from and how to get them :)
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #50 on: December 02, 2020, 08:51:31 am »
I recommend taking a look at Linux From Scratch.  It is basically an online book by Gerard Beekmans, Matthew Burgess, Bruce Dubbs, and others, for how to build a Linux system directly from sources.

You need to be a Linux user in order to benefit from this book, so it seems a bit cart before horse. If that's the route to take, I would recommend cross-compiling Linux on the OS the user is comfortable with (and since we're talking DOS, presumably that would be Windows). If nothing else, it would introduce the user to where those CLI 'commands' come from and how to get them :)
Cart before horse? Stop twisting my words! >:(

I wrote
Story time:

When I developed a self-replicating USB stick [...]

If that sort of thing interests you, I recommend taking a look at Linux From Scratch.
I was NOT putting a cart before a horse, and recommending someone with DOS experience to go build Linux From Scratch.
I told a personal story, about building a custom dedicated Linux OS from sources, and how basic that environment was; but if that sort of thing interests one, that book is the way to go.

I DO NOT recommend building Linux from Scratch, or going say Gentoo way, as a way to learn Linux.  It is a way to learn how to build a Linux system.
« Last Edit: December 02, 2020, 06:23:54 pm by Nominal Animal »
 

Offline Ed.Kloonk

  • Super Contributor
  • ***
  • Posts: 4000
  • Country: au
  • Cat video aficionado
Re: DOS vs. Linux
« Reply #51 on: December 02, 2020, 09:10:57 am »

I DO NOT recommend building Linux from Scratch, or going say Gentoo way, as a way to learn Linux.  It is a way to learn how to build a Linux system.

I think that it is imperative that cranky newbie Linux users do indeed try LFS. Only then will they appreciate the work that goes in to preparing a downloadable ISO.

The double-edged sword is that there are many who think that LFS is as easy as a pre-prepared distro (until they try to do it). And when it doesn't work out for them, the whole internet needs to hear about it.

iratus parum formica
 

Online Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11633
  • Country: my
  • reassessing directives...
Re: DOS vs. Linux
« Reply #52 on: December 02, 2020, 09:37:38 am »
Some people don't want to build an oscilloscope; they just want to use it for another purpose. Leave the building of oscilloscopes to the oscilloscope makers.
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): Its now indisputable that... organisms “expertise” contextualizes its genome, and its nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 
The following users thanked this post: Ed.Kloonk, DiTBho

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #53 on: December 02, 2020, 10:06:32 am »
I DO NOT recommend building Linux from Scratch, or going say Gentoo way, as a way to learn Linux.  It is a way to learn how to build a Linux system.
I think that it is imperative that cranky newbie Linux users do indeed try LFS. Only then will they appreciate the work that goes in to preparing a downloadable ISO.

The double-edged sword is that there are many who think that LFS is as easy as a pre-prepared distro (until they try to do it). And when it doesn't work out for them, the whole internet needs to hear about it.
I disagree, because after all, it is just a tool, and you don't need to be aware of the effort or cost that went into the tool to wield it properly.
(Which is basically what Mechatrommer said above.)

The possible workflows differ a lot.  Some users do not need to know the OS details at all.  They might even have a good grasp of Bash scripting to batch-ify repeated workflows; that is currently quite portable between Linux, the BSDs, and Mac OS, possibly even partially to Windows (although I don't know if/how Bash and native Windows apps mesh).  With just a few command-line tools (say inotifywait on Linux, fswatch on Mac), you can set up magical folders (that automate format conversions and other such stuff into another folder), saving a lot of time in certain work situations.
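A minimal sketch of such a magical folder, assuming inotify-tools and ImageMagick are installed; the directory names and the PNG-to-JPEG conversion are invented for the example:

```shell
# magical_folder: watch ~/incoming and, whenever a PNG finishes being
# written there, convert it to a JPEG in ~/converted.
magical_folder() {
    mkdir -p ~/incoming ~/converted
    # -m: keep monitoring; close_write fires once a file is fully written
    inotifywait -m -e close_write --format '%f' ~/incoming |
    while read -r name; do
        case $name in
            *.png) convert ~/incoming/"$name" \
                           ~/converted/"${name%.png}.jpg" ;;
        esac
    done
}
# magical_folder &    # run it in the background; kill it when done
```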

Some users can benefit from OS customization.  For me, small things like personalized splash screens (my own one is Tux rolling his eyes in opposite directions) make it more mine.  An organization certainly benefits from their machines having some such visual details (it changes how cow-orkers treat their machines; they're more willing to consider, ask, and invent better workflows), and the sorts of customizations a capable Linux admin can do.  However, those admins rarely have done any of the work the people there do, and therefore cannot usually imagine what kind of changes would make workflows better.  So, if you have users who understand the basics enough to have suggestions, or at least identify slow/problematic parts of their workflows, they can describe them to the admins in a way the admins can actually act upon.

(Whenever I've talked about Linux to organizations, I've always first done some informal interviews, finding out exactly how people do their jobs.  Which kind of added to my depression; it is amazing how much unnecessary stuff and extra efforts people spend in their day-to-day work.  Easily makes one lose all hope for humanity, if you do it too much, really.)

In other words, I don't even think that ordinary Linux users need to be able to customize their own workflows; but I do believe they need rough understanding enough to describe the problems they are having, in a way that those with the skill (but not the experience with that particular workflow) can suggest and implement the changes.
See?  There are lots of different "depths" of knowledge, and even a rough understanding is extremely useful.  Not everyone needs to be a Linux admin, but everyone should have enough understanding to be able to describe their problems in a manner that can be acted upon.
 
The following users thanked this post: Ed.Kloonk, DiTBho

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #54 on: December 02, 2020, 03:41:24 pm »
some people dont want to build oscilloscope, they just want to use it for other purpose. let the building of oscilloscope to the oscilloscope maker.

That is exactly the case for most users.  I use Mint, I don't care how it is cooked, I just want a bootable ISO image file and reasonable setup instructions.  Me and Google, that's all it takes.  Google will be the most used tool for new Linux users.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #55 on: December 02, 2020, 04:03:55 pm »
apropos ...

It does no good to sit at the command line, staring at the screen, with no idea what to type.  Say you want a list of USB devices:


Code: [Select]
$ apropos usb

gives


Code: [Select]
~ $ apropos usb
lsusb (8)            - list USB devices
sane-canon630u (5)   - SANE backend for the Canon 630u USB flat...
sane-cardscan (5)    - SANE backend for Corex CardScan usb scan...
sane-epjitsu (5)     - SANE backend for Epson-based Fujitsu USB...
sane-find-scanner (1) - find SCSI and USB scanners and their de...
sane-genesys (5)     - SANE backend for GL646, GL841, GL843, GL...
sane-gt68xx (5)      - SANE backend for GT-68XX based USB flatb...
sane-kvs1025 (5)     - SANE backend for Panasonic KV-S102xC USB...
sane-kvs20xx (5)     - SANE backend for Panasonic KV-S20xxC USB...
sane-kvs40xx (5)     - SANE backend for Panasonic KV-S40xxC USB...
sane-ma1509 (5)      - SANE backend for Mustek BearPaw 1200F US...
sane-mustek_usb (5)  - SANE backend for Mustek USB flatbed scan...
sane-mustek_usb2 (5) - SANE backend for SQ113 based USB flatbed...
sane-pieusb (5)      - SANE backend for USB-connected PIE Power...
sane-plustek (5)     - SANE backend for LM983[1/2/3] based USB ...
sane-sm3600 (5)      - SANE backend for Microtek scanners with ...
sane-sm3840 (5)      - SANE backend for Microtek scanners with ...
sane-u12 (5)         - SANE backend for Plustek USB flatbed sca...
sane-usb (5)         - USB configuration tips for SANE
usb-devices (1)      - print USB device details
usb_modeswitch (1)   - control the mode of 'multi-state' USB de...
usb_modeswitch_dispatcher (1) - Linux wrapper for usb_modeswitc...
usbhid-dump (8)      - dump USB HID device report descriptors a...

The items of interest at the command line are tagged (1) for ordinary users and [8] for super users.  How do I know that?

https://linux.die.net/man/

Sorry about the square brackets; "(8)" gets turned into a smiley icon by the forum software.

Here's the output of 'lsusb' on my Pi 400


Code: [Select]
~ $ sudo lsusb
Bus 002 Device 009: ID 0781:5581 SanDisk Corp. Ultra
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 04d9:0007 Holtek Semiconductor, Inc.
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

You can see that it has recognized my SanDisk thumb drive

Want to know a bit about Python?


Code: [Select]
~ $ apropos python

One of the entries is 'thonny', an IDE for Python.  It also shows up in the Programming menu, but I have never tried it.

Code: [Select]
thonny (1)           - Python IDE for beginners

« Last Edit: December 02, 2020, 04:18:14 pm by rstofer »
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #56 on: December 02, 2020, 04:57:31 pm »
Quote
The items of interest at the command line are tagged (1) for ordinary users and [8] for super users.  How do I know that?

https://linux.die.net/man/

So we have a new user who is trying to find his way around, and the command you go for is the only one marked 'superuser' rather than the one marked 'you, yes you, the new ordinary user'?
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #57 on: December 02, 2020, 05:06:16 pm »
Quote
Fuck you, you quoted me off context, twisting my words!

You're welcome.

It's always difficult trimming walls of text to highlight just the relevant part. What may seem relevant to me at that time perhaps won't to you coming back to it with a different mindset.  Don't forget that this thread is in the context of a new user, not a guru wanting to check the nuts and bolts.

Any twisting was unintentional, but I accept that what I was seeing may not have been what you intended to be seen or still think it looks like.

Er... I guess this could go downhill so better add the wall for reference too. Please don't take me on for top-posting!

I recommend taking a look at Linux From Scratch.  It is basically an online book by Gerard Beekmans, Matthew Burgess, Bruce Dubbs, and others, for how to build a Linux system directly from sources.

You need to be a Linux user in order to benefit from this book, so it seems a bit cart before horse. If that's the route to take, I would recommend cross-compiling Linux on the OS the user is comfortable with (and since we're talking DOS, presumably that would be Windows). If nothing else, it would introduce the user to where those CLI 'commands' come from and how to get them :)
Cart before horse? Fuck you, you quoted me off context, twisting my words! >:(

I wrote
Story time:

When I developed a self-replicating USB stick [...]

If that sort of thing interests you, I recommend taking a look at Linux From Scratch.
I was NOT putting a cart before a horse, and recommending someone with DOS experience to go build Linux From Scratch.
I told a personal story, about building a custom dedicated Linux OS from sources, and how basic that environment was; but if that sort of thing interests one, that book is the way to go.

I DO NOT recommend building Linux from Scratch, or going say Gentoo way, as a way to learn Linux.  It is a way to learn how to build a Linux system.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #58 on: December 02, 2020, 06:03:02 pm »
Quote
The items of interest at the command line are tagged (1) for ordinary users and [8] for super users.  How do I know that?

https://linux.die.net/man/

So we have a new user who is trying to find his way around, and the command you go for is the only one marked 'superuser' rather than the one marked 'you, yes you, the new ordinary user'?

In the default Pi environment, the user 'pi' is automatically a superuser.  Totally insecure but that's the way it works.  'sudo' doesn't even require a password!

I'm running as an ordinary user on the Pi and lsusb works because it, and all the other ls... commands in /usr/bin have permissions 755.  The apropos entry is incorrect.

Same story in Mint - all ls... commands have permissions = 755

A new user won't spend 5 minutes configuring Linux from the command line without running into 'sudo' and the requirement for the user to be in the 'sudo' group.  Unfortunately, using 'addgroup' requires superuser permissions and this is the reason that, in the RPi world, the default user is automatically a superuser.

In the Mint world, you need to log in as 'root' to add the first user to the sudo group.
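A hedged sketch of checking where you stand ("sudo" is the Debian/Mint group name, Fedora and friends use "wheel"; "alice" is a placeholder user):

```shell
# Print the current account's groups, then test whether "sudo" is one.
id -nG
if id -nG | tr ' ' '\n' | grep -qx sudo; then
    echo "in the sudo group"
else
    # the one root-only step on a fresh install (placeholder user name)
    echo "as root, run: usermod -aG sudo alice"
fi
```

A new group membership only takes effect at the next login, which catches out nearly every new user at least once.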

And this is why Linux is on less than 2% of workstations.  The learning curve is STEEP!

« Last Edit: December 02, 2020, 06:10:53 pm by rstofer »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #59 on: December 02, 2020, 06:22:46 pm »
Don't use linux.die.net for man pages, use the proper upstream, https://www.kernel.org/doc/man-pages/ (for the directory), or direct links to https://man7.org/linux/man-pages/.

Section 8 is for "administration and privileged commands".  It includes commands that work for all users (lsusb, lspci, and lsof being commonly used by all users).  Just because a command is described in section 8, does not mean it is only available to superuser.
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #60 on: December 02, 2020, 06:25:16 pm »
Quote
all ls... commands have permissions = 755

OK. It is easy to conflate the semantic differences when talking about user abilities. Is a superuser someone who knows what they are doing - a guru, perhaps - or someone with a technical attribute (here, a security clearance)? The context of this thread, which concerns a user not well versed in the art of Linux, has a bearing on it.



 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #61 on: December 02, 2020, 06:30:14 pm »
Don't forget that this thread is in the context of a new user, not a guru wanting to check the nuts and bolts.
I think that "If that sort of thing interests you, then ..." is quite a limited recommendation, because the sort of thing that was described was obviously quite a lot of work that not many Linux users would be interested in.  Changes the meaning of the entire sentence.

Any twisting was unintentional, [...]
In that case, I need to use less angry language.  (Edited that post to reflect that.)
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #62 on: December 02, 2020, 06:46:23 pm »
Is a superuser someone who knows what they are doing - a guru, perhaps - or someone with a technical attribute (here, a security clearance)?
Superuser is (a user controlling) a process that has full root privileges; one whose actions the Linux kernel will not restrict.

There are two ways to have those:
  • Have UID 0.  This is the traditional root, and the default target user when using sudo.
    or
  • Have all relevant Linux capabilities, regardless of the UID of the current process.

Linux capabilities are usually granted to a process by setting filesystem capability flags.  (This itself requires either UID 0, or the CAP_SETFCAP capability.) See man 8 setcap for the utility for doing this.
For example, if you want a binary program foo to be able to switch to root user, but don't want to make it setuid root, give it the CAP_SETUID and CAP_SETGID capabilities via setcap cap_setuid,cap_setgid=pe foo .
Linux capabilities can also be inherited from the parent process, but it requires the capabilities to be inheritable (i in the mask set; p is permitted and e is effective).
The capabilities a process can obtain from filesystem capability flags can also be restricted (this is called the bounding set), as the bounding set is inherited from the parent process.  Once dropped from the bounding set, that capability is impossible to regain by the current process or by any child processes.
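A quick way to see these sets without any special tools is to read them straight out of /proc; the setcap/capsh lines in the comments assume libcap's utilities are installed, and "foo"/"myping" are placeholder binaries:

```shell
# CapPrm, CapEff, CapBnd etc. are hex bitmasks: an ordinary user's
# shell shows CapEff as all zeros, while UID 0 shows (nearly) all bits.
grep '^Cap' /proc/self/status

# With libcap's tools installed (not executed here):
#   capsh --decode=00000000a80425fb        # turn a mask into names
#   sudo setcap cap_setuid,cap_setgid=pe foo && getcap foo
```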

(There are also secure execution filters, see man 2 seccomp, which can be used to limit the current process and any child processes to a subset of syscalls or syscall parameters.  It is useful when one wants to run e.g. untrusted code without allowing any access to syscalls, or only to a subset of "deemed safe" syscalls like read/recv/pread, write/send/pwrite, clock_gettime() and so on.  But this is completely separate from Linux capabilities, and is basically a filtering mechanism applied by the kernel at the userspace-kernel boundary.)

I don't like using "root" to indicate the technical can-do-anything-user-account, as there is that other option, Linux capabilities, that is completely separate.

But whenever I use "superuser", I mean it in the technical sense: (user running) a process that the Linux kernel allows to do anything.
« Last Edit: December 02, 2020, 06:53:27 pm by Nominal Animal »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #63 on: December 02, 2020, 08:21:53 pm »
'sudo' doesn't even require a password!

That's not really a big problem. It still means you have to think and type something extra before the really dangerous actions.

On a personal system, someone finding your screen unlocked and typing "rm -rf ~" or "sudo rm -rf /" makes little practical difference to you.
 

Offline Ed.Kloonk

  • Super Contributor
  • ***
  • Posts: 4000
  • Country: au
  • Cat video aficionado
Re: DOS vs. Linux
« Reply #64 on: December 02, 2020, 10:28:14 pm »
'sudo' doesn't even require a password!

That's not really a big problem. It still means you have to think and type something extra before the really dangerous actions.

On a personal system, someone finding your screen unlocked and typing "rm -rf ~" or "sudo rm -rf /" makes little practical difference to you.

That's the best way I know how to completely fsck a Linux system.

iratus parum formica
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #65 on: December 03, 2020, 02:13:59 am »
'sudo' doesn't even require a password!

That's not really a big problem. It still means you have to think and type something extra before the really dangerous actions.

On a personal system, someone finding your screen unlocked and typing "rm -rf ~" or "sudo rm -rf /" makes little practical difference to you.

That's the best way I know how to completely fsck a Linux system.

I expect sudo dd if=/dev/null of=/ bs=1M would work about as well.
 
The following users thanked this post: Ed.Kloonk

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #66 on: December 03, 2020, 03:26:05 am »
Late in the book "Raspberry Pi For Secret Agents" there is an example of a script that wipes the user's home directory after three failed login attempts.  It also creates a new, empty home directory.

Not to worry, the script follows the part in the book where they show how to encrypt the home directory such that low level scans don't produce useful information.

https://www.amazon.com/Raspberry-Pi-Secret-Agents-Third-ebook/dp/B01HF2HS7S

Somehow it wound up in my Kindle directory so I must have clicked on something.  I'm pretty sure I wound up paying for it...
 

Offline Monkeh

  • Super Contributor
  • ***
  • Posts: 7992
  • Country: gb
Re: DOS vs. Linux
« Reply #67 on: December 03, 2020, 04:31:11 am »
Quote
all ls... commands have permissions = 755

OK. It is easy to conflate the semantic differences when talking about user abilities. Is a superuser someone who knows what they are doing - a guru, perhaps - or someone with a technical attribute (here, a security clearance)? The context of this thread, which concerns a user not well versed in the art of Linux, has a bearing on it.

'superuser' has nothing to do with it anyway. The definition of section 1 is "general commands", and 8 "system administration commands" - listing USB devices is not 'general' and does indeed belong in section 8.
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #68 on: December 03, 2020, 04:51:07 am »
Thanks for the clarification.
 

Offline bpiphany

  • Regular Contributor
  • *
  • Posts: 129
  • Country: se
Re: DOS vs. Linux
« Reply #69 on: December 03, 2020, 08:36:23 am »

I expect sudo dd if=/dev/null of=/ bs=1M would work about as well.

I often have to correct myself too, but that doesn't do anything. You meant
dd if=/dev/zero of=/
add
status=progress
to see exactly how much damage you manage to cause.

Another fun thing to do is
kill 1
which fails (since you really meant %1), and you follow up with
sudo !!


edit: it actually looks like
dd if=/dev/zero of=/
doesn't do anything either. Without root, '/' cannot be opened for writing, and even as root it is a directory rather than a regular file, so dd refuses to write to it this way.
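A safe way to practise these incantations is to point dd at a scratch file rather than a device (status=progress is a GNU dd option; the file name is made up):

```shell
# Write 8 MiB of zeros into a scratch file, watching the byte count
# tick by; count= caps the damage at 8 blocks of 1 MiB.
dd if=/dev/zero of=scratch.img bs=1M count=8 status=progress
wc -c scratch.img    # 8388608 bytes
```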
« Last Edit: December 03, 2020, 08:43:17 am by bpiphany »
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #70 on: December 03, 2020, 09:18:15 am »
Code: [Select]
highlander# uptime
 09:15 up 11 years

highlander# whoami
root

highlander# last | head -n1
root     tty1                          Thu Dec 03 2009 - 20:19

highlander# last | grep `mydate-today`
root     pts/0        192.168.1.14     Thu Dec 03 2020 - 09:15

Well, I have been using my Linux console as root for *exactly* 11 years
and anything catastrophic hasn't yet happened

what am I doing wrong? :D
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #71 on: December 03, 2020, 09:21:30 am »
Code: [Select]
highlander# sudo
not found in the current path

highlander# su
not found in the current path
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 
The following users thanked this post: Ed.Kloonk

Offline Ed.Kloonk

  • Super Contributor
  • ***
  • Posts: 4000
  • Country: au
  • Cat video aficionado
Re: DOS vs. Linux
« Reply #72 on: December 03, 2020, 09:35:22 am »
Code: [Select]
highlander# uptime
 09:15 up 11 years

highlander# whoami
root

highlander# last | head -n1
root     tty1                          Thu Dec 03 2009 - 20:19

highlander# last | grep `mydate-today`
root     pts/0        192.168.1.14     Thu Dec 03 2020 - 09:15

Well, I have been using my Linux console as root for *exactly* 11 years
and anything catastrophic hasn't yet happened

what am I doing wrong? :D

That's a lot more than five nines there. Amazeballs.
iratus parum formica
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #73 on: December 03, 2020, 09:38:19 am »
Guys, if you are worried about that, I would suggest exercising on a LiveCD demo, so you can't cause any permanent issues: it's read-only media, with the rootfs copied from the CD into RAM at every boot.

11 years ago, I preferred to practice this way for a couple of months (on Minix) in order to understand things and how to manage stress rather than hiding the fear of causing some damage behind a "sudo" command, which was not even installed.


The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1209
  • Country: pl
Re: DOS vs. Linux
« Reply #74 on: December 03, 2020, 04:39:24 pm »
Guys, if you are worried about that, I would suggest to exercise on a LiveCD-demo so you can't cause any permanent issues since it's a read-only media with the rootfs copied from the CD into into the ram at every boot.
One should not forget that the local media are still available and their content can be destroyed. A virtual machine is the safest option, with the added benefit of being able to act as root and damage everything without consequences. Damaging things is good, just like in electronics. :D

11 years ago, I preferred to practice this way for a couple of months (on Minix) in order to understand things and how to manage stress rather than hiding the fear of causing some damage behind a "sudo" command, which was not even installed.
And here an important note: sudo exists to limit risk. Unlike logging in as root, with sudo one has control over what can be invoked, how, and by whom.
People imagine AI as T1000. What we got so far is glorified T9.
 
The following users thanked this post: DiTBho

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #75 on: December 03, 2020, 07:44:00 pm »
If you mess something up and it appears to be unrecoverable, just reinstall the system.  Yes, on 'mature' systems, there will be a lot of lost applications and likely some important files but those were all backed up somewhere else.

It's not nearly as hard to reinstall a Linux system as Windows.  Among other things, I don't need a secret-squirrel code to activate the installation.

The Raspberry Pi is especially nice in this regard because the imager just lays down a new copy on the SD card and you're good to go.  The 'Accessories -> SD Card Copier' utility will make an image of the existing system card.  All you need is a USB <=> SD card gadget.

https://www.amazon.com/gp/product/B006T9B6R2

It is worth considering 256 GB SD cards...

You may need a male-to-female USB A extension cable to get the gadget away from other devices plugged into the computer.

How many stories have we heard about people wiping out their 5-1/4" system floppies?

There are many web pages with lists of bash commands.  Some are probably correct, others seem to have problems.  On one, 'mv' is described as useful for moving directories but makes no mention of moving files or just renaming them.

A fun command I didn't know was there:  'compgen'

Code: [Select]
$ compgen -c
will list all 1912 directly executable commands on my Pi and AFAICT doesn't include things installed over in /opt.  I really have no idea which directories are searched but probably those on the PATH.  My RISC cross-compiler installed in /opt is not on the path and not on the list. There doesn't seem to be a 'man' page... 

Code: [Select]
$ compgen -c | sort
will make the same list in sorted order

Code: [Select]
$ compgen -c | sort | more
And this will allow paging so you can actually see something

Code: [Select]
$ compgen -c | wc -l
will just display the number of commands

Code: [Select]
$ compgen -a
will display all active aliases


It's about the piping thing!  The big feature of Unix was pipes and making everything look like a file.

As I wrote this, I installed the Free Pascal compiler.  compgen now shows 2127 executables.

https://www.geeksforgeeks.org/compgen-command-in-linux-with-examples/

And Google is your best friend!

« Last Edit: December 03, 2020, 08:02:35 pm by rstofer »
 

Offline Monkeh

  • Super Contributor
  • ***
  • Posts: 7992
  • Country: gb
Re: DOS vs. Linux
« Reply #76 on: December 03, 2020, 08:03:53 pm »
A fun command I didn't know was there:  'compgen'

This is a bash builtin. Like everything else in bash, you'll find the documentation in the bash manpage.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #77 on: December 03, 2020, 09:49:58 pm »

I expect sudo dd if=/dev/null of=/ bs=1M would work about as well.

I often have to correct myself too, but that doesn't do anything. You meant
dd if=/dev/zero of=/
add
status=progress
to see exactly how much damage you manage to cause.

Another fun thing to do is
kill 1
which fails (since you really meant %1), and you follow up with
sudo !!


edit: it actually looks like
dd if=/dev/zero of=/
doesn't do anything either. Without trying it as root '/' is not a file and cannot be written to this way.

Yeah, you'll actually want /dev/sda or whatever.

And, yep, /dev/zero. Or /dev/random
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #78 on: December 03, 2020, 11:31:57 pm »
Here's a pretty decent FREE Kindle book on bash scripting but it starts with the very basics.

I'm starting at the beginning to fill in the gaps.  I'm not overly interested in scripting but it's something I should be familiar with or at least have reference material.

https://www.amazon.com/gp/product/B081D8JFCM
 
The following users thanked this post: DiTBho

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: DOS vs. Linux
« Reply #79 on: December 05, 2020, 11:48:25 am »
Quote
Recently I used DD to image a drive
Quote
In DOS, if you want to 'copy' something, you use 'copy', but it seems in Linux a plethora of commands exist to do the same thing.
Well, your claim of simplicity for DOS just isn't true.  "copy" works for simple files on DOS.  It wouldn't work on a directory, and it wouldn't help for your drive image copy.  You'd have to find "xcopy" and "diskcopy."   Linux's "cp" and DOS's "copy" are pretty much equivalent (cp has more options: you can copy directory hierarchies with cp instead of needing a separate program like xcopy).
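A quick way to see cp covering xcopy's ground (a sketch using throwaway temp directories, not real paths):

```shell
# cp -a copies a whole directory tree, preserving attributes:
# the job DOS needed a separate xcopy for.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/docs/old"
echo "hello" > "$src/docs/old/note.txt"
cp -a "$src/docs" "$dst/"            # recursive, like xcopy /s /e
cat "$dst/docs/old/note.txt"         # prints: hello
rm -rf "$src" "$dst"
```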

Yeah, Linux command-line program names are ... short and obscure.  But you don't really have to learn very many to be moderately proficient (you didn't have to learn many DOS commands, either), so it's still about even.  A modern Linux system will have a whole bunch of other stuff beyond what DOS ever provided, and DOS probably had a lot of commands that you never learned, either.  (Certainly the "cmd prompt" of a modern Windows system can potentially run MANY .exe programs installed in the c:\windows hierarchy that I have no idea what they do...)


To find commands, become more familiar with the "man" (manual) command.

"man -k copy" will list all the man pages about copying things.   Unfortunately, these days this is "polluted" with documentation for library functions from various languages and libraries that you might have installed (i.e. the memcpy() C function, and also "FcCacheCopySet" and other obscure stuff).

Actual commands are in "Section 1" of the manual.  Unfortunately, I don't see a way to get "man -k" only look in section 1 :-(
You can get reasonable results with: "man -k copy | grep "(1)"":

billw@VB-ubuntu:~$ man -k copy | grep "(1)"
cp (1)               - copy files and directories
cpio (1)             - copy files to and from archives
dd (1)               - convert and copy a file
debconf-copydb (1)   - copy a debconf database
gvfs-copy (1)        - (unknown subject)
install (1)          - copy files and set attributes
mcopy (1)            - copy MSDOS files to/from Unix
objcopy (1)          - copy and translate object files
rcp (1)              - secure copy (remote file copy program)
rsync (1)            - a fast, versatile, remote (and local) file-copying tool
scp (1)              - secure copy (remote file copy program)
ssh-copy-id (1)      - use locally available keys to authorise logins on a r...
x86_64-linux-gnu-objcopy (1) - copy and translate object files


If you're lucky, an individual man page ("man scp", say) will have a "see also" section at the end referring you to related commands.



 

Offline SparkyFX

  • Frequent Contributor
  • **
  • Posts: 676
  • Country: de
Re: DOS vs. Linux
« Reply #80 on: December 05, 2020, 02:03:16 pm »
Often I find myself in front of systems on which installing additional packages is a problem, so making the most of a basic system is an advantage.

People that just want to try out such commands or need some functions on windows might want to check out cygwin.
And if you are fond of the Norton Commander, there is also the Midnight Commander on linux.
Support your local planet.
 

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1209
  • Country: pl
Re: DOS vs. Linux
« Reply #81 on: December 05, 2020, 02:37:23 pm »
Uhh… dd is not like any disk-whatever software of any system. It can work on disks only because the operating system exposes them as files and the user is passing the name of those files to dd. So can literally any other piece of software if it has no specific requirements about file characteristics. The feature is in the Unix-inspired operating systems, not in dd.

dd is associated with disks for cultural rather than technical reasons. When ancient dinosaurs still inhabited Earth, many other tools were not binary safe, so users had to resort to any program that didn't mangle binary data. Old habits die hard, and this is the only reason dd is still used for that purpose.(1) In 2020 dd not only rarely offers an advantage over e.g. cat, it can be harmful. The manual casually fails to clearly mention what dd actually does. If you think you know, take a simple test: what will this command do?
Code: [Select]
dd if=10mebibyte-file of=destination bs=1M count=10
Is your answer: it copies 10 MiB (10·1M) from “10mebibyte-file” to “destination”? If yes, you have failed the test. :P

dd performs a read from if into a buffer of bs bytes and then performs a write of the same size as the read into of. Repeats count times. The fine print: the buffer is bs bytes, not the number of bytes read. The number of bytes transferred in a block may be smaller — even zero. It will not re-read to fill the block to the full size before writing it. Which means that it may as well write 10 times nothing. And yes: that does happen and did lead to data corruption/loss.

To somewhat remedy the issue, dd from GNU Coreutils is nice enough to at least scream at you if that occurs, printing “dd: warning: partial read”. It also has a non-standard iflag value to force the program to attempt re-reads until a full block is collected: fullblock. But both of those are non-standard features specific to that very implementation. Even with GNU’s implementation there is usually no reason to take the risk unless you really need something found only in dd.
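The difference can be exercised safely on ordinary files (a sketch assuming GNU dd; temp files stand in for real devices):

```shell
# With iflag=fullblock, dd re-reads until each 1 MiB block is full,
# so a short read can never silently shrink the data (GNU dd only).
src=$(mktemp); dst=$(mktemp)
head -c 3145728 /dev/urandom > "$src"       # 3 MiB of test data
dd if="$src" of="$dst" bs=1M iflag=fullblock status=none
cmp -s "$src" "$dst" && echo "copies match"
rm -f "$src" "$dst"
```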

Therefore: stop perpetuating promotion of dd for copying data and associating it with disks.
____
(1) There are some jobs in which various features of dd may be useful, but nearly all uses you find in suggestions on the internet are not among those tasks.
People imagine AI as T1000. What we got so far is glorified T9.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #82 on: December 05, 2020, 02:40:31 pm »
Actual commands are in "Section 1" of the manual.  Unfortunately, I don't see a way to get "man -k" only look in section 1 :-(
Use
    man -s 1 -k term
It is especially useful when looking for a POSIX C library function, when one is unsure if it is in section 3 or section 2 (in this order of preference):
    man -s 3,2 -k term
Similarly, for administrative commands you probably want
    man -s 1,8 -k term
and for games and such
    man -s 6 -k term

Now, nobody has time to remember those, so I suggest putting in your profile (.profile in your home directory):
Code: [Select]
alias manc='man -s 1,8'
alias mang='man -s 6'
alias manf='man -s 3,2'
so you can use manc command or manc -k topic etc. to only look in the relevant sections.

Listing library function manual pages based on match in the function name is slightly more aggravating, as --names-only includes the short description.  For those, I suggest a trivial Bash function (that you can put in your .profile, if you use bash):
Code: [Select]
function whatfunc() {
    for term in "$@"; do
        man -s 3,2 -k "$term" | awk '$1 ~ '"/$term/"
    done | sort -u
}
I often want just plain C and POSIX functions, and I have both mawk and gawk installed, with mawk being much faster in this case, so I use
Code: [Select]
function whatc() {
    for term in "$@"; do
        man -s 3,2 -k "$term" | mawk 'BEGIN { ok["(2)"]++; ok["(3)"]++; ok["(3posix)"]++ } ($2 in ok) && ($1 ~ '"/$term/)"
    done | sort -u
}
In both cases all arguments are (separate) regular expressions, so e.g. whatc ^read lists all POSIX C library functions that start with read.
« Last Edit: December 05, 2020, 02:44:10 pm by Nominal Animal »
 
The following users thanked this post: DiTBho

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #83 on: December 05, 2020, 03:04:34 pm »
It also has a non-standard flag for iflag to force the program to attempt re-reads until a full block is collected: fullblock

Code: [Select]
SYNOPSIS
       #include <stdio.h>

       size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);
       size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);

fread returns the actual number of bytes read, so if the returned value is different from "nmemb", the developer just has to implement a while() loop to make sure the rest gets read.

Hasn't this been implemented? I will verify, but I doubt a "serious" developer would make such a silly mistake.

Anyway, I have been using dd to copy partitions (even across the network, dd coupled with nc) for 11 years, and haven't yet found a single byte lost/corrupted.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #84 on: December 05, 2020, 03:09:18 pm »
Alternative to dd on Linux: GParted!

GParted is an amazingly flexible graphical partition editor built for the GNOME desktop environment, but it can do much more than just edit partitions. One nifty trick I discovered is that it can copy partitions from one drive to another.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Karel

  • Super Contributor
  • ***
  • Posts: 2217
  • Country: 00
Re: DOS vs. Linux
« Reply #85 on: December 05, 2020, 03:18:56 pm »
fread returns the actual number of bytes read,  ...

Nope.

On  success,  fread()  and  fwrite() return the number of items read or written.
This number equals the number of bytes transferred only when size is 1.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #86 on: December 05, 2020, 04:45:09 pm »
If there is an alternative to 'dd' for my application, I'm certainly willing to look into it.

I have a file with 3248 sectors of 512 bytes each (1,662,976 bytes).  It is the image of the IBM 1130 system disk.  It needs to be laid down on a Compact Flash in exactly the format it has.  Start at LBA 0 and write the entire image.  Don't think, don't interleave, just copy the image!

I don't need an underlying file system, don't want any kind of formatting or partitioning and I certainly don't want an MBR.  Just a plain binary copy from the file to the CF.

sudo dd if=./disk.img of=/dev/sdc   <-- or whatever dmesg says is the CF

Neither Linux nor dd need to know anything about the file, all I want to do is lay it down.

If there is a better utility, demonstrably better, than 'dd', which I can bury in a Makefile, I'm all up for learning about it.

I might consider a GUI application but since I am already at the command line doing cross-assembly and image building, leaving the terminal session wouldn't be considered a positive.

Incidentally, dd can also be used to copy the image from the CF to a file for backup purposes.  Kind of handy!

« Last Edit: December 05, 2020, 04:46:44 pm by rstofer »
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #87 on: December 05, 2020, 05:29:05 pm »
Neither Linux nor dd need to know anything about the file, all I want to do is lay it down.

As I cannot always have X11, a good alternative I am evaluating is dcfldd; see the project website.

It's described as an enhanced version of dd developed by the U.S. Department of Defense Computer Forensics Lab. It has some useful features for forensic investigators such as:
  • On-the-fly hashing of the transmitted data
  • Progress bar of how much data has already been sent
  • Wiping of disks with known patterns
  • Verification that the image is identical to the original drive, bit-for-bit(1)
  • Simultaneous output to more than one file/disk is possible
  • The output can be split into multiple files
  • Logs and data can be piped into external applications


(1) With dd, I usually use md5sum.
E.g. you want to clone file.raw into /dev/sda1 and verify the copy is identical to the image:

# md5sum file.raw
fb65ba489968b8754e23f08863306218 (it returns something similar)

# md5sum /dev/sda1
fb65ba489968b8754e23f08863306218 (this may take a lot of time)

Then I simply compare these two strings.
If they are equal, then everything is OK.
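One caveat with this check: md5sum over /dev/sda1 hashes the whole partition, so the sums only match when the image and the partition are exactly the same size. A sketch that compares just the image-sized prefix instead (temp files stand in for file.raw and the device, so it can be tried safely):

```shell
# Compare only the first $size bytes of the target against the image,
# since the target partition is usually larger than the image.
img=$(mktemp); dev=$(mktemp)            # stand-ins for file.raw, /dev/sda1
head -c 4096 /dev/urandom > "$img"
cat "$img" > "$dev"
head -c 4096 /dev/zero >> "$dev"        # the target has extra space
size=$(stat -c %s "$img")
if cmp -s -n "$size" "$img" "$dev"; then echo verified; else echo MISMATCH; fi
rm -f "$img" "$dev"
```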


« Last Edit: December 05, 2020, 05:38:39 pm by DiTBho »
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1209
  • Country: pl
Re: DOS vs. Linux
« Reply #88 on: December 05, 2020, 05:49:47 pm »
If there is an alternative to 'dd' for my application, I'm certainly willing to look into it. (…)
Pretty much any shell:
Code: [Select]
cat /disk.img >/dev/sdc
If it needs to be executed with sudo:
Code: [Select]
sudo cp /disk.img /dev/sdc
or
Code: [Select]
sudo tee /dev/sdc </disk.img >/dev/null
I usually use md5sum (…)
Drop caches before hashing the written data. Otherwise you’re likely to hash the input file still cached in memory, not the data on the output device. Note that dropping caches is global: it will affect all cached data. Other processes will have to pull it back from storage when they need it, which of course makes the system slower:
Code: [Select]
sudo sync
sudo sysctl vm.drop_caches=3
See sysctl vm.drop_caches in the docs. Alternatively, you may try your luck with dd. This is one of the cases in which it may be useful: the iflag=direct option. But it’s not guaranteed to always work.
« Last Edit: December 05, 2020, 06:28:59 pm by golden_labels »
People imagine AI as T1000. What we got so far is glorified T9.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #89 on: December 05, 2020, 06:28:36 pm »
If there is an alternative to 'dd' for my application, I'm certainly willing to look into it. (…)
Pretty much any shell:
Code: [Select]
cat /disk.img >/dev/sdc
If it needs to be executed with sudo:
Code: [Select]
sudo cp /disk.img /dev/sdc
or
Code: [Select]
sudo tee /dev/sdc </disk.img >/dev/null

I never thought about cat.  Next time I build an image (which might be never), I'll give it a try.
One thing I get from dd is the number of sectors written.  That seems like a nice cross-check.  But cat, cp and tee are easier to use.  They work because devices are treated as files.

And to think that Unix is nearly 50 years old.  It was way ahead of the other OSes of the era.  It's kind of a shame that Linux isn't more popular than it is.

dc3dd is another candidate

https://sourceforge.net/projects/dc3dd/

Using cp seems to be the most intuitive because the operation really is just a copy.
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #90 on: December 05, 2020, 07:17:27 pm »
Quote
It was way ahead of the other OSes of the era.

You seem quite satisfied with it all. Perhaps you could answer a perennial question I have which no-one seems to have managed to resolve yet:

Is there a linux application that can create an image in a similar way to any of a dozen utils do on Windows/DOS? The important parts:

* Sector copying, typically of only used sectors (although all sectors would be an option, rarely used).

* Mounting of a image as a virtual drive.

* (Biggie, this) Coherent backup of a live system writing to the filesystem.

Superficially this looks like cloning, but it's a bit more sophisticated. Also a blindingly fast way to get from bare metal to a restored system: file-by-file restore tends to be lots slower simply because the filesystem has to be manipulated during the process.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #91 on: December 05, 2020, 08:26:52 pm »
Sector copying, typically of only used sectors (although all sectors would be an option, rarely used).
I do not know of a filesystem-agnostic way of copying only used sectors.  I do tend to compress the images (using xz) for storage (for images that contain filesystems I cannot reliably mount, or contain raw data).

Hole-punching would be a simple option (similar to sparse option for dd): blocks that contain all zeroes do not need to be stored.  (That is, if you have a 1 GB file that has only few blocks with nonzero data, the on-disk size it requires can be just a few kilobytes on ext, xfs, etc.)
The problem is that the advanced filesystems (ext, xfs, and so on) do not expose a coherent used/free block mapping that one could use, while the filesystem is mounted, to interpose zeroes for the unused blocks.
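On a single file the idea can be tried with fallocate from util-linux (assuming a filesystem that supports hole punching, such as ext4, xfs or tmpfs):

```shell
# Punch holes where the file contains runs of zero blocks: the content
# stays byte-identical, but the zero runs stop occupying disk space.
f=$(mktemp)
{ head -c 8192 /dev/urandom; head -c 1048576 /dev/zero; } > "$f"
before=$(md5sum < "$f")
fallocate --dig-holes "$f"
after=$(md5sum < "$f")
[ "$before" = "$after" ] && echo "content unchanged"
du --apparent-size -h "$f"; du -h "$f"   # apparent vs. allocated size
rm -f "$f"
```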

On filesystems that can be reliably mounted, tar etc. are a better option, as tar can retain even the xattrs of the files, and the archive can be trivially compressed and copied to a different-size volume.

Mounting of a image as a virtual drive.
Loopback mounts, very commonly used.  You can even specify an offset into an (uncompressed) image.  Some image formats can be mounted read-write, too; not just read-only.

In fact, if you are using a Linux machine, you probably did exactly that when booting the machine.  Most distributions use an initial ramdisk containing kernel modules and a tiny init system (in cpio format) to bring up support for necessary hardware devices before the actual root filesystem is mounted (and 'pivoted' to).  The 'snap' package format also uses loopback mounts.

Coherent backup of a live system writing to the filesystem.
Use LVM (Logical Volume Management); it supports snapshots.  It is basically a layer between hardware devices and logical filesystems. Any snapshot can be copied at leisure (no need to freeze the filesystem; the snapshot is atomic, and you can continue using the filesystem without affecting the snapshot, as long as the underlying storage has sufficient space to store the subsequently modified blocks).
« Last Edit: December 05, 2020, 08:29:31 pm by Nominal Animal »
 
The following users thanked this post: PlainName

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #92 on: December 05, 2020, 08:50:26 pm »
When the first Linux minilaptops came along (Asus Eee PC and Acer Aspire One, in 2008 or thereabouts), I used a 64 MB tmpfs mount for my Firefox profile.  When I used GDM, I had each instance of Firefox (running as myself) untar the profile to a tmpfs ("ramdisk") mount before Firefox proper was started, and tar the profile back when the last instance of Firefox exited.  GDM had certain bugs that made it impossible to run session scripts reliably.  Then I switched to LightDM, and used session scripts to populate the profile tmpfs mount when I logged in to the GUI, and recompress and tear it down when I logged out, suspended, or hibernated the machine.
This sped up Firefox immensely, and normal rabbit-hole web browsing would typically let the spinny-disk HDD spin down for hours on end.

Something very similar can be done on Linux SBCs that boot from microSD but have plenty (1 GB or more) of RAM.

I've used LVM on servers to make daily backups of black-box-user-modifiable data, by making a temporary snapshot (with a bit of magic, waiting for up to a limit for no user connections to the server to make the snapshot), copying the snapshot over a local dedicated link to another machine, then removing the snapshot.  This ensures that the snapshot is accurate at that point of time, and the only magic there just tries to make sure there are no "slow" modifications in progress at the snapshot time.  (I usually do a double check: check1, snapshot, check2; if check1 and check2 are consistent, then use snapshot; otherwise remove snapshot and retry, unless we cannot wait any longer for a snapshot.)

Virtual server farms (virtual hosting services) use LVM for backups and snapshotting, although they usually have some management software to make that real easy.  I haven't maintained a virtual server farm (only clusters and individual servers), so I don't know the exact details.
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 6779
  • Country: pl
Re: DOS vs. Linux
« Reply #93 on: December 05, 2020, 08:54:41 pm »
A nice "interactive" alternative to cat/dd is pv. On Linux it can also monitor progress of file descriptors opened by other processes (-d).
Everybody install it now if you haven't already :-+

Is there a linux application that can create an image in a similar way to any of a dozen utils do on Windows/DOS? The important parts:

* Sector copying, typically of only used sectors (although all sectors would be an option, rarely used).
* Mounting of a image as a virtual drive.
* (Biggie, this) Coherent backup of a live system writing to the filesystem.
Low level backup of a live filesystem requires close cooperation with the filesystem driver in the kernel. I don't think anyone supports such things on Linux. A lot of hard edge cases, like what happens if files are added or deleted or moved into an area on disk you have already backed up?

Copy-on-write snapshots solve this. I believe these are available if you have your FS on LVM (never tried it, and hardly anybody uses LVM on desktop systems), and they are available on BTRFS, so perhaps there would be a way to do it on BTRFS.

You can freeze a filesystem (queued writes are flushed, new writes are delayed until unfrozen) and then use any standard utilities to backup the block device. With cat/dd/pv you get a crude image of all sectors, but on XFS you can use xfs_copy which creates a sparse file with an image of the filesystem - all unused sectors are zeroed and turned into "holes" to save space on the backup device. The image can be loopback mounted or xfs_copied back to the disk.

A stupid trick that I have used to efficiently back up an NTFS filesystem from Linux: fill all the unused space with a file containing nothing but zeros, then create an image using dd with the special option that creates a sparse file.
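The option in question is presumably GNU dd's conv=sparse, which seeks over all-zero output blocks instead of writing them. It can be demonstrated on plain files (stand-ins for the zero-filled NTFS volume and the backup image):

```shell
# conv=sparse: zero blocks in the input become holes in the output, so
# the image's content is identical but its allocated size is smaller.
src=$(mktemp); img=$(mktemp)                 # stand-ins for disk and image
{ head -c 8192 /dev/urandom
  head -c 1048576 /dev/zero                  # "unused space" full of zeros
  head -c 4096 /dev/urandom; } > "$src"
dd if="$src" of="$img" bs=4K conv=sparse status=none
cmp -s "$src" "$img" && echo "identical content"
rm -f "$src" "$img"
```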

Long story short, the situation isn't great, but frankly, I'm not sure it's that great on Windows either. Seriously, what happens if you move, edit, rename, copy, or delete large groups of files/folders during the operation?

edit
BTW, is it really that slow to populate an empty filesystem with files from a tar archive, compared to restoring a low-level image? On Linux?
I rarely do such things, but I'm under the impression it would run close to the full sequential throughput of the disk. I never really feel like filesystem overhead is a significant limit when writing bulk data to an unfragmented FS under Linux.
« Last Edit: December 05, 2020, 09:14:07 pm by magic »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #94 on: December 05, 2020, 09:05:42 pm »
You can get a lot of mileage out of a loopback device, which presents a file as a virtual disk; you can then create a filesystem on it and mount it.  From C, you can write code to read/write blocks, so I suppose just about anything can be done.  Given that the loopback device can have its sectors addressed, non-filesystem applications can also work.

Will it look like a Windows app?  Probably not.  Ordinary users, those who stick to the GUI desktop, probably have no use for such a utility and the folks playing at the command line have a lot of tools at their fingertips.

https://www.thegeekdiary.com/how-to-create-virtual-block-device-loop-device-filesystem-in-linux/

Even dd allows for offsets (skip for the input, seek for the output) for reading from or writing to arbitrary sectors.
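A small sketch of that on an image file (temp files standing in for a real device, with hypothetical 512-byte sectors):

```shell
# Write one 512-byte "sector" at LBA 4 with seek=, read it back with
# skip=.  conv=notrunc stops dd from truncating the rest of the image.
img=$(mktemp); sec=$(mktemp)
truncate -s 8192 "$img"                      # 16 sectors of 512 bytes
head -c 512 /dev/urandom > "$sec"
dd if="$sec" of="$img" bs=512 seek=4 conv=notrunc status=none
dd if="$img" bs=512 skip=4 count=1 status=none | cmp -s - "$sec" && echo ok
rm -f "$img" "$sec"
```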

I suppose a script file could be created to do just about anything to a file (including a block device).  It may not be pretty but it would probably work quite well.

https://www.computerhope.com/unix/dd.htm

My needs are simple,  I just want to copy an image.

ETA:  I tried the loopback device (link above) on my RPi and it works quite well.  I have no idea what I'm going to do with that experience but something may come up.
« Last Edit: December 05, 2020, 09:37:05 pm by rstofer »
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #95 on: December 05, 2020, 09:30:07 pm »
<some requirements>
<some suggestions>

Thanks. But is there something that already does all that? I am after something I don't need to join Udemy to learn how to use: just say "do the backup right now", or schedule it, and it's done. Just as importantly, a simple single step for doing a restore. (Actually, that might be the more important bit!)
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #96 on: December 05, 2020, 09:43:45 pm »
Quote
Copy on write snapshots solve this.

Are these the same as the Windows volume shadow service?

Quote
Seriously, what happens if you move, edit, rename, copy, delete large groups of files/folders during the operation?

They get buffered from the drive until the backup is done. Anything above the shadow service doesn't know any different and will see the changed files, but the backup has snapshotted the drive at the start and nothing will change for that until it's done.

Quote
BTW, is it really that slow to populate an empty filesystem with files from a tar archive, compared to restoring a low level image? On Linux?

Don't know since I don't currently use Linux seriously (because of the backup situation), and those who do don't seem to think images are worthwhile. However, I would think there is a difference since you couldn't really do faster than blatting sequential sectors onto a disk, whereas with a filesystem you need to keep jumping back and forth, updating it as you add files. Presumably it's just a matter of scale and perception, and if you've never tried both then it could be hard to appreciate the difference.

On Windows it's quite a bit different. So much so that I don't bother with F&F backups at all - everything is an image, and if I need a couple of files or folders I'll mount the image and copy them like that.

Unfortunately, I can't quote figures since it's a loooong time since I did a F&F restore. Backups probably won't show much difference because of caching, compression and all the overheads of writing to files, etc. It's the restore that is the speed demon (or snail).
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #97 on: December 05, 2020, 09:50:19 pm »
Quote
You can get a lot of mileage out of a loopback device which creates a virtual disk as a file and can then create a filesystem and mount it.

Thanks, but is there anything that actually does this? I am not after a roll-your-own solution (at least, not until I am very much more of a Linux user, which is looking to be a chicken and egg situation).

Quote
Ordinary users, those who stick to the GUI desktop, probably have no use for such a utility

On the contrary, it's precisely those that would benefit. They just don't know it because there isn't anything available for them to use. Kind of like having a box of screws but any quest for help using them just comes up with a massive variety of hammers.

I'm sure you're also aware that many users, not just GUI users, don't actually have any backup at all. That doesn't mean there is no need, just that they haven't got one for any of many reasons, including not knowing they could or should, as well as not knowing how.

Quote
I tried the loopback device (link above) on my RPi and it works quite well.

Thanks for the info  :-+
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 6779
  • Country: pl
Re: DOS vs. Linux
« Reply #98 on: December 05, 2020, 10:53:18 pm »
The equivalent of LVM in Windows appears to be "dynamic disks". It means you don't partition normally, but create a disk area where chunks of space can be arbitrarily and dynamically allocated and assigned to logical volumes. Linux LVM supports taking COW snapshots of logical volumes - if your logical volume which carries the filesystem is 90% of the total LVM area, then up to 10% can be COW-ed. It doesn't care what filesystem or data are on the volume. Problem is, your FS is smaller than the available space so it's kinda meh to use it solely for snapshotting.

I'm not familiar with Windows VSS, but it appears to work at a higher level. I suspect it's integrated with the NTFS driver and dynamically allocates free space within the NTFS volume to create temporary copies of files and NTFS metadata as they are modified. At any rate, it surely is better than what I expected when you mentioned 3rd party utilities. 3rd party utilities which are a GUI wrapper over a tightly integrated system feature can of course provide a whole different level of functionality.

The closest Linux thing is ZFS/BTRFS snapshots. With one command you create a new "root directory" within the volume which points to all the same old files and subdirectories. When anything is modified, content is duplicated so that the other "root directory" doesn't see the change. Recently XFS also gained some level of snapshot capability. Snapshots exist entirely within one filesystem and persist until they are deleted.

A snapshot can be mounted and backed up to external storage with any archiver. Not sure if there is software to make a sector level copy of a snapshot (while ignoring other snapshots). Seems possible in theory, as the snapshot can be mounted RO or not mounted at all.
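For the ZFS/BTRFS case, the whole snapshot-backup-delete cycle is short enough to show. A sketch using btrfs — the paths are made up, everything needs root, and it is not meant to be run as-is:

```shell
# Atomic, read-only snapshot of the subvolume mounted at /mnt/data
btrfs subvolume snapshot -r /mnt/data /mnt/data/.snap-today

# Back the snapshot up with any archiver...
tar -cf /backup/data-today.tar -C /mnt/data/.snap-today .

# ...or stream it in btrfs's own serialized format
btrfs send /mnt/data/.snap-today > /backup/data-today.btrfs

# Delete the snapshot when it is no longer needed
btrfs subvolume delete /mnt/data/.snap-today
```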

On filesystems without snapshots, freezing for the duration of taking backup is the only way to get a consistent image. Meh.
« Last Edit: December 05, 2020, 11:00:12 pm by magic »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: DOS vs. Linux
« Reply #99 on: December 05, 2020, 11:26:50 pm »
VSS is quite a package but I doubt that many single-user systems are running it.  I'm not sure how they can limit the shadow copy creation to 10 seconds but they must have a scheme.

I don't think that Linux has anything like this.  For large servers, it seems like a requirement.

https://docs.microsoft.com/en-us/windows-server/storage/file-server/volume-shadow-copy-service
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #100 on: December 05, 2020, 11:31:00 pm »
But is there something that already does all that?
:-//

Normal users should probably just use
    tar -cSJf $(date +%Y%m%d)-$(id -un).tar.xz -C ~/ .
to take a backup of their home directory and all its contents.  Because of the compression, it will be slow.  If you want it fast, omit compression (and sparse file detection):
    tar -cf $(date +%Y%m%d)-$(id -un).tar -C ~/ .

Quote
Copy on write snapshots solve this.
Are these the same as the Windows volume shadow service?
Don't know, don't use windows.

But you can indeed continue using LVM volumes after the snapshot without affecting the snapshot, as long as the storage device has sufficient space for the changes after the snapshot.

Quote
BTW, is it really that slow to populate an empty filesystem with files from a tar archive, compared to restoring a low level image? On Linux?
I would think there is a difference since you couldn't really do faster than blatting sequential sectors onto a disk,
You obviously haven't used Linux and tar, and are basing your assumptions on how Windows behaves.  That is... unworkable.

On my system, tarballing Supertux2 sources (various types of files, binaries, images, lots of sources; about 8500 files, 1.1G of data) takes 4.2 seconds to tarball and 4.1 seconds to extract, with cold caches, if using uncompressed archives.  Of course, I'm using a fast Samsung 512G SSD (MZVLW512HMJP-000H1) for the storage.  (Note: I do mean cold caches, clearing out both metadata and cached file contents before and after compression and before and after decompression, using sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches ; sync ; echo 3 > /proc/sys/vm/drop_caches'.)

tar is pretty good at extracting files in continuous manner, so it does not really "jump around".  The metadata updates are cached by the kernel page cache, and won't immediately hit storage.  Because of filesystem structure, you won't get raw block device speeds, but the difference is minor for the typical filesystems (ext4, xfs, zfs). And, if you use any sort of compression, the compression and decompression will typically be the speed bottleneck.  If the backup is on slow media, like a DVD-RW or similar, or comes via a non-local network connection, you'll want to use compression, because the data transfer rate will be the bottleneck, and less data to transfer means a faster restore.  (You can safely pipe tar data through a SSH pipe, both ways.  Done that too for such backups.)

I don't think that Linux has anything like this.  For large servers, it seems like a requirement.
I just explained LVM does that and more.  If you are saying that Linux needs GUI tools so that monkeys can use the LVM utilities, then ... :palm: ... Let's just say that everyone who is tasked to manage actual servers should know how to use and automate LVM.  It isn't complicated at all, if you are familiar with Linux admin tools.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #101 on: December 05, 2020, 11:31:19 pm »
Often I find myself in front of systems on which installing additional packages is a problem, so making the most of a basic system is an advantage.

And this is why everyone needs to know vi (not even vim) at least at a basic level :-(  (try "esc esc :q!", folks)

Quote
People that just want to try out such commands or need some functions on windows might want to check out cygwin.

NOOOO PLEEASE NOOOOO.

Cygwin is just endless pain.

And there's absolutely zero point to it now that you have WSL, which with WSL2 even performs really well as long as you stick to the Linux filesystem and don't need to access USB devices (that's the suckiest part; WSL1 works for that).
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #102 on: December 05, 2020, 11:45:05 pm »
Normal users should probably just use
    tar -cSJf $(date +%Y%m%d)-$(id -un).tar.xz -C ~/ .
to take a backup of their home directory and all its contents.  Because of the compression, it will be slow.  If you want it fast, omit compression (and sparse file detection):
    tar -cf $(date +%Y%m%d)-$(id -un).tar -C ~/ .

Try lz4 if you want a decent bit of compression and speed as well. Not as good compression as gzip or xz or bzip2 obviously, but a heck of a lot faster, and plenty enough to compress the blank sectors and text files.

Quote
On my system, tarballing Supertux2 sources (various types of files, binaries, images, lots of sources; about 8500 files, 1.1G of data) takes 4.2 seconds to tarball and 4.1 seconds to extract, with cold caches, if using uncompressed archives.  Of course, I'm using a fast Samsung 512G SSD (MZVLW512HMJP-000H1) for the storage.  (Note: I do mean cold caches, clearing out both metadata and cached file contents before and after compression and before and after decompression, using sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches ; sync ; echo 3 > /proc/sys/vm/drop_caches'.)

tar is pretty good at extracting files in continuous manner, so it does not really "jump around".  The metadata updates are cached by the kernel page cache, and won't immediately hit storage.  Because of filesystem structure, you won't get raw block device speeds, but the difference is minor for the typical filesystems (ext4, xfs, zfs). And, if you use any sort of compression, the compression and decompression will typically be the speed bottleneck.

Try it with lz4 :-)
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #103 on: December 05, 2020, 11:49:45 pm »
Quote
VSS is quite a package but I doubt that many single-user systems are running it.

AFAIK, if you're running Windows you have VSS available. Users, per se, don't touch it - it's the apps that get involved.

However, before VSS was widely available, some backup apps implemented their own version. Indeed, the wonderful and long-lived DriveSnapshot still lets you choose either VSS or their proprietary solution.
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 6779
  • Country: pl
Re: DOS vs. Linux
« Reply #104 on: December 05, 2020, 11:55:33 pm »
VSS is quite a package but I doubt that many single-user systems are running it.  I'm not sure how they can limit the shadow copy creation to 10 seconds but they must have a scheme.
The relevant propadocumentation is incredibly confusing. There is no way they can back up an NTFS volume of any serious size in 10 seconds. I suppose it means that it takes less than 10 seconds to create a COW snapshot on NTFS and then the rest of the process churns in the background while I/O is unfrozen and applications can continue.

I don't think that Linux has anything like this.  For large servers, it seems like a requirement.
Filesystem snapshots.
What Linux doesn't have is a scheme to notify applications of the snapshot being taken so they can flush their buffers, but if your applications can't recover from an atomic snapshot then they can't recover from a hard reboot either, so ::)
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #105 on: December 06, 2020, 12:08:12 am »
Quote
Normal users should probably just use
    tar -cSJf $(date +%Y%m%d)-$(id -un).tar.xz -C ~/ .
to take a backup of their home directory and all its contents.

I have different sorts of backups for explicit data, but what I am specifically interested in implementing (well, having some app implement) is efficient disaster recovery. Whilst I accept that this method gets your data back, it's not quick and it's just your data. It's not really suitable for bare-metal restore.

Quote
Quote from: dunkemhigh on Today at 10:43:45

    Quote

        BTW, is it really that slow to populate an empty filesystem with files from a tar archive, compared to restoring a low level image? On Linux?

    I would think there is a difference since you couldn't really do faster than blatting sequential sectors onto a disk,

You obviously haven't used Linux and tar, and are basing your assumptions on how Windows behaves.  That is... unworkable.

I am not a current Linux user, no. But I am basing my assumptions not just on Windows experience. (For future reference, I have previously built Linux, and the necessary cross-compiler and build tools, from sources, and additionally written Linux device drivers. I am not completely without Linux experience, it's just that I don't use it now and haven't for quite a while.)

Quote
On my system, tarballing Supertux2 sources (various types of files, binaries, images, lots of sources; about 8500 files, 1.1G of data) takes 4.2 seconds to tarball and 4.1 seconds to extract, with cold caches, if using uncompressed archives.  Of course, I'm using a fast Samsung 512G SSD (MZVLW512HMJP-000H1) for the storage.  (Note: I do mean cold caches, clearing out both metadata and cached file contents before and after compression and before and after decompression, using sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches ; sync ; echo 3 > /proc/sys/vm/drop_caches' .

Well that's jolly impressive, but I have to say it is somewhat missing the point. To achieve that 4.1 seconds of extraction you first need to install and set up your Linux system, no? You could use a live CD perhaps, but you'd still need to do a fair amount of messing about to get to the point where you can start streaming your backed-up system onto the disk.

The ideal would be to insert boot media, optionally browse for the restore source (could be on the network or in a removable drive slot), click restore. Come back with a coffee and your system is exactly as it was before whatever disaster struck.

This isn't just me being lazy - every option or necessary thought during the process is an opportunity to screw up, and usually one is quite highly stressed at this time, so simpler and easier is better.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #106 on: December 06, 2020, 12:13:21 am »
I don't have Supertux2 sources but I just tried it on an old llvm source tree of 988M I had lying around, using your cache clearing incantation. "find llvm | wc -l" gives 27402.

5:38.5 711417024 tar -cSJf llvm.tar llvm
0:26.5 763861086 tar -cSzf llvm.tar llvm
0:04.8 794145179 tar -cSf - llvm | lz4 >llvm.tar
0:03.7 981084160 tar -cSf llvm.tar llvm

And with hot caches (I won't bother with xz):

0:25.9 tar -cSzf llvm.tar llvm
0:02.1 tar -cSf - llvm | lz4 >llvm.tar
0:00.7 tar -cSf llvm.tar llvm

So lz4 saves 180 MB (19%) while adding only a little over one second to the time.

gzip only saves 30 MB more, while adding more than an extra 20 seconds.

xz saves another 50 MB over gzip, but is soooo slow that it's only worth considering for very slow transmission channels, or large numbers of downloaders.

(tests on 4.2 GHz 2990WX Zen 1+, which is not all that quick these days)
« Last Edit: December 06, 2020, 12:18:05 am by brucehoult »
 

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: DOS vs. Linux
« Reply #107 on: December 06, 2020, 01:59:14 am »
Quote
Use
    man -s 1 -k term
Thanks. It doesn't work on my Mac (which has a 2005 version of "apropos") :-( (but it does work on Ubuntu, so it's still useful advice.)
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #108 on: December 06, 2020, 02:12:53 pm »
Normal users should probably just use
    tar -cSJf $(date +%Y%m%d)-$(id -un).tar.xz -C ~/ .
to take a backup of their home directory and all its contents.  Because of the compression, it will be slow.  If you want it fast, omit compression (and sparse file detection):
    tar -cf $(date +%Y%m%d)-$(id -un).tar -C ~/ .

Try lz4 if you want a decent bit of compression and speed as well. Not as good compression as gzip or xz or bzip2 obviously, but a heck of a lot faster, and plenty enough to compress the blank sectors and text files.
Good idea; pity it hasn't been integrated yet into tar, and is not included in default installations.  (I had to install liblz4-tools to get the command-line utilities.)
    tar -cf archive.tar.lz4 -I 'lz4 -1' -C directory .
takes four seconds to compress, with compressed size about 80% of original (thus saving about 20% of space, with this kind of mixed binary/text content).
(At higher compression levels, it takes 4.2 seconds on -2 and 30 seconds on -3, with a very modest increase in compression; -9, i.e. maximum compression, took 37 seconds, still with only a very modest gain.)
As expected (because of the LZ4 compression scheme), decompression takes less than four seconds regardless of the compression level used.

Thus, it is a very good idea to use lz4 compression instead of no compression at all. Just remember to install the necessary package first (from the standard repository of your Linux distribution; package liblz4-tools in Debian derivatives). Because lz4 defaults to speed, to back up one's home directory one would then use
    tar -cf $(date +%Y%m%d)-$(id -un).tar.lz4 -I lz4 -C ~/ .
and to restore (after creating and moving to the home directory)
    tar -xf path-to-archive.tar.lz4 -I unlz4
and as shown, it will be blazingly fast.

Note that if you only want to extract a specific file or subdirectory, just list their relative paths (starting with ./) in the restore command.  I personally like to create an index,
    tar -tvf path-to-archive.tar.lz4 -I unlz4 > path-to-archive.list
so that one can use grep, less, more, or any text editor to look for specific files, and especially their timestamps, to see if a particular file is included in the archive.
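A self-contained round trip, with a throwaway directory standing in for real data, looks like this:

```shell
# Throwaway tree standing in for real data
mkdir -p proj/src
echo 'int main(void){return 0;}' > proj/src/main.c

# Archive it, and write a human-readable index alongside
tar -cf proj.tar -C proj .
tar -tvf proj.tar > proj.list

# Find a file (and its timestamp) in the index...
grep 'main\.c' proj.list

# ...and extract just that file, by its ./-relative path
mkdir -p restored
tar -xf proj.tar -C restored ./src/main.c
```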

If you will be storing the backups on particularly slow media, consider using better compression (option 'z' for gzip, 'j' for bzip2, 'J' for xz), as the storage media read rate will be the bottleneck; higher compression means less data to transfer, and thus a faster overall wall-clock time to restore.

Finally, if you have ample disk space, want the backup snapshot to take as little of your time as possible, but also want to compress it, you can create the tarball first, and then compress it afterwards (optionally niced down so compression is done only when the processor is otherwise idle enough), using
    archive=$(date +%Y%m%d)-$(id -un).tar
    tar -cf "$archive" -C ~/ .
after which you can continue using the files, and start compressing the archive using e.g.
    nice xz "$archive"


Let's say you are a GUI user, and you decide that the Backups directory in your home directory will contain only backup files (and will itself not be included in backups), you only need the following script:
Code: [Select]
#!/bin/bash

# Directory under home directory containing the backups
BACKUPDIR="Backups"

# Subtree of home directory to be backed up.  Use "." for entire home directory tree.
SUBTREE="."

# Archive base name (no extension)
ARCHIVE="$(date +%Y%m%d-%H%M%S)-$(id -un)"

# Compressor command and its extension to be used.  You can leave these empty.
COMPRESSOR="xz"
COMPRESSEXT=".xz"

# Quiet silly error messages from standard error
exec 2>/dev/null

# Make sure home directory exists and can be entered
cd || exit 1

# If it does not exist, create the Backups directory
mkdir -m 0700 -p ~/"$BACKUPDIR"
if ! cd ~/"$BACKUPDIR" ; then
    zenity --title "Backup" --error --no-wrap --text 'Cannot create "Backups" directory in your home directory!'
    exit 1
fi

# Tell the user to close all applications (so that the snapshot will be coherent):
zenity --title "Backup" --question --ok-label "Continue" --cancel-label="Cancel" --no-wrap --text $"Please close all applications, so that\nthe snapshot will be coherent." || exit

# Show a pulsating progress bar while creating the backup.
if ! ERRORS=$(exec 3>&1 ; tar -cf "$ARCHIVE.tar" --exclude "./$BACKUPDIR" -C ~/. "$SUBTREE" 2>&3 | zenity --title "Backup" --progress --pulsate --auto-close --no-cancel --text $"Creating backup $ARCHIVE ..." 2>/dev/null) ; then
    rm -f "$ARCHIVE.tar"
    zenity --title "Backup" --error --no-wrap --text $"Backup failed:\n$ERRORS"
    exit 1
fi

if [ -n "$COMPRESSOR" ]; then
    # Show a pulsating progress bar while compressing the backup.
    if ! ERRORS=$(exec 3>&1 ; $COMPRESSOR "$ARCHIVE.tar" 2>&3 | zenity --title "Backup" --progress --pulsate --auto-close --no-cancel --text $"Backup completed.\nFeel free to run applications.\nNow compressing the backup ..." 2>/dev/null) ; then
        rm -f "$ARCHIVE.tar$COMPRESSEXT"
        zenity --title "Backup" --error --no-wrap --text $"Backup failed.\nUncompressed backup $ARCHIVE.tar still exists.\n$ERRORS"
        exit 1
    fi
else
    COMPRESSEXT=""
fi

# Display an info dialog showing completion.
zenity --title "Backup" --info --no-wrap --text $"Backup $ARCHIVE.tar$COMPRESSEXT successfully generated."
exit 0
If you save it temporarily as say script, you can "install" it to your desktop by running
    install -m 0744 script ~/Desktop/Backup
It uses Zenity to provide GUI dialogs.

If we include the time it takes to mount and unmount a filesystem image, I do believe a script similar to above, using tar, will be less effort and less wall-clock time used.  (Note that closing all applications is not necessary; it is just there to remind users that if they take a backup while applications are modifying files, the backup may not have a coherent copy of the file.)

With a bit of effort, one could easily modify the above script (or convert it to Python3 + Qt5, for example), to detect if removable storage with a backup-y volume name is mounted, and offer to copy the compressed backup there; plus options to adjust compression, exclude directories, combine them into profiles, and so on.

The problem is, we'd really need to investigate first what kind of practices and use patterns would make this utility not only useful, but something that provides value to the user.  Most ordinary Linux users never take backups.  Advanced users who are more likely to take backups, use their own scripts.  So, the true problem at hand is not the code, but the usability of such tool.  Sure, it would be easy to optimize it for one specific user – say myself –, and publish it in the hopes that others might find it useful, but when we are talking about non-expert Linux users, such a tool should encourage better practices.  For example, it should be able to show the exact commands being run to the end user, so that if the user finds the tool not wieldy enough, they can adapt the commands to write their own scripts.

As always, this boils down to observing workflows and designing a better workflow, before implementing the tools.  I really, really hate the way proprietary applications and OSes impose a specific workflow on the user, and make them believe that the tool should describe the workflow to the user; whereas it is always the user who is wielding the tool, and responsible for the results.  Sure, some users just want something to click that should do an adequate job; but that is not what I want to facilitate.  If you are using Linux to accomplish things, you should be trying to find ways of accomplishing those things efficiently.  If not, you're better off using one of the proprietary OSes instead, as they do have a pattern for you to fit into.  I don't wanna, and I don't want to bring that sort of stuff to Linux either.  Others are free to do so, though.
 
The following users thanked this post: PlainName, DiTBho

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #109 on: December 06, 2020, 02:26:27 pm »
There is also a LZ4-Bindings for Python, see here (homepage) and here (github)

p.s.
Python is great, for scripting I am also going to learn Ruby, so I hope in 2021 I will know enough about 3 scripting-languages, in this order:
  • bash
  • python
  • ruby
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #110 on: December 06, 2020, 02:30:50 pm »
I have different sorts of backups for explicit data, but what I am specifically interested in implementing (well, having some app implement) is efficient disaster recovery. Whilst I accept that this method gets your data back, it's not quick and it's just your data. It's not really suitable for bare-metal restore.

What is the disaster you wish to recover from, and how do you use your machine overall?

Me, I don't want to do bare metal backups on my workstation, because I know I can install any Linux distribution in twenty-thirty minutes, and because of the rate of development, I know I'll want to switch distributions in the future.  Usually, I switch distributions only when I have multiple machines at hand, so I can try out various ones before deciding on which one.  Then, I can restore either my complete user profile, or only my files, in seconds.

If you need a High Availability workstation, you're better off having two, syncing the active one to the backup one.  That way you minimize your downtime.

If you do things that might mess up your storage device, but not the hardware in particular, use an external hard drive with a tarball or image of your operating system, and a separate partition for your user file tarballs.  (The example script I showed earlier can be trivially expanded to use Zenity to show a dialog of which backup(s) to remove, if the partition is say 75% full or more.  Or you can automate the entire thing, running off crontab, rotating the backups.)

You must understand that I am not denigrating your needs.  Quite the opposite, I applaud everyone who thinks about backups and how they utilize them.  It is good.

What I am concerned about, is that you are fixated on a backup workflow that works for you in Windows, and are insisting something similar to be used in Linux, and are using really odd'ish arguments why you insist on it.  I am saying that that pattern is not workable, unless you implement it yourself, or buy an implementation off someone.
(The facilities already exist: LVM does cover your needs.  What you would need to buy, is a high level, easy graphical interface.  I know server farms have them, but haven't used them myself; and I and others have written command-line versions for our own servers.  I am not aware of any GUIs intended for workstation use, but they might exist.)

Instead, you should think about what you need out of those backups.  Instead of focusing on full filesystem imaging, if the 20-30 minute downtime is too much for you to install a fresh version of your Linux distro, then make a clean OS image without any user files, and a separate partition for user file tarballs.  Trying to get full filesystem imaging to work for you in Linux like it does in Windows is doomed to failure; the OSes are that much different.  In particular, a full filesystem image will always contain all users home directories.  Perhaps you are the only user on your machine, but that is not true for all Linux workstations.
« Last Edit: December 06, 2020, 02:38:14 pm by Nominal Animal »
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 6779
  • Country: pl
Re: DOS vs. Linux
« Reply #111 on: December 06, 2020, 02:42:11 pm »
I was wondering if there are tools like xfs_copy for other filesystems and found that somebody wrote a program which can efficiently low-level copy a whole bunch of various filesystems.

https://partclone.org/

It uses some weird format by default, but its disk-to-disk mode (-b) can be abused to create mountable sparse file images :-+
 
The following users thanked this post: DiTBho

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #112 on: December 06, 2020, 05:11:45 pm »
After re-reading my own posts, I see my tone can be interpreted as hostile.  Not my intention; I may be a bit frustrated, but not hostile/angry/peeved.  Once again, me fail Eglish.
To fix that, I shall try to outline how LVM2-based snapshotting and backups work, from a user's perspective.

LVM2 is the Linux Logical Volume Manager.  (The 2 at the end is usually omitted, since the first version of LVM is ancient, and no longer used.)
It comprises a group of userspace tools, and a library that interfaces to the Linux kernel Device Mapper, which does the actual work.

You have one or more hardware storage devices, with at least one partition on each.
If you are using software RAID, the md (multiple device) kernel driver will use those partitions, and export the RAID devices (/dev/mdN) as the partitions actually used.
LVM combines one or more of these partitions into one or more physical volumes. (Run sudo pvdisplay to see current physical volumes.)

One or more physical volumes – across hardware devices – are combined into a Volume Group.  You can have one or more Volume Groups.  (Run sudo vgdisplay to see these.  You may need to run sudo vgscan first, to scan all known LVM devices for Volume Groups, especially if you just attached external media.  It can take a bit of time.)

Each Volume Group contains one or more Logical Volumes.  Each logical volume is seen as a partition, and contains a filesystem or swap space, is used as a raw partition for some kind of experiments, or is unused. (Run sudo lvdisplay to see these.)  Logical Volumes can be resized within a Volume Group.

Do not use all space in a Volume Group for Logical Volumes.  Snapshots use up some space in the Volume Group.

When Logical Volumes are mounted, the device path (as shown in e.g. df -h or du -hs or mount) will show /dev/mapper/VolumeGroup-LogicalVolume as the device.
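To make the layering concrete, this is roughly how such a stack is created in the first place. A sketch only, not meant to be run as-is: the device and volume names are made up, and every command needs root:

```shell
# Mark a partition as an LVM physical volume
sudo pvcreate /dev/sdb1

# Collect one or more physical volumes into a Volume Group
sudo vgcreate vg0 /dev/sdb1

# Carve a 100 GiB Logical Volume out of it, deliberately leaving
# free space in the Volume Group for snapshots
sudo lvcreate -L 100G -n home vg0

# Put a filesystem on it and mount it
sudo mkfs.ext4 /dev/mapper/vg0-home
sudo mount /dev/mapper/vg0-home /home
```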

To take a snapshot of a Logical Volume, you create a new logical volume using lvcreate (8):
    sudo lvcreate -s VolumeGroup/LogicalVolume -L size -n NewLogicalVolume
This is atomic, and takes an exact snapshot of that Logical Volume at that point, and is okay to do even when it is mounted and active.  You'll want to ensure that applications are not writing to files in that Logical Volume right then, or the files will obviously be in an intermediate state.  (There are tricks on how to check if any writes have occurred after the snapshot, most being heuristic in the sense that they can give false positives, but not false negatives; quite reliable.)

Then, you can either mount the new Logical Volume (/dev/mapper/VolumeGroup-NewLogicalVolume, but with any dashes in NewLogicalVolume doubled; there is also a copy-on-write /dev/mapper/VolumeGroup-NewLogicalVolume-cow device) and copy/compress/use its contents at your leisure, or save the image.

For storing the image on an external storage device, I recommend creating a partition the exact same size as the Logical Volume.  Since the partition will be a normal filesystem, it is usually auto-mounted when connected.  To update the backup, you can use the volume label to find the device corresponding to the partition:
    label="foo" ; user=$(id -un) ; dev=$(LANG=C LC_ALL=C mount | awk '$2 == "on" && $3 == "'"/media/$user/$label"'" { print $1 }')
If current user has media with volume label foo mounted, its corresponding device will be in shell variable dev; otherwise the shell variable will be empty.
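On systems with a reasonably recent util-linux, findmnt can do the same lookup more directly (an alternative sketch, same label convention as above):

```shell
label="foo"
# findmnt prints the backing device of a given mount point, and
# nothing (non-zero exit, hence the || true) if it is not mounted
dev=$(findmnt -n -o SOURCE "/media/$(id -un)/$label" || true)
echo "${dev:-not mounted}"
```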
Then, unmount the device using
    umount $dev
and copy the partition image over using e.g.
    sudo dd if=/dev/mapper/VolumeGroup-NewLogicalVolume of="$dev" conv=fdatasync,nocreat bs=1048576
Note that the device name is inserted by the current shell to the sudo command.
Finally, remove the no longer needed Logical Volume,
    sudo lvremove -y VolumeGroup/NewLogicalVolume
and the snapshot is done.

To restore an existing image, find the device corresponding to the partition (again as $dev).  Then, unmount the Logical Volume,
    sudo umount /dev/mapper/VolumeGroup-LogicalVolume
restore the image,
    sudo dd if="$dev" of=/dev/mapper/VolumeGroup-LogicalVolume conv=fdatasync,nocreat bs=1048576
and remount the Logical Volume,
    sudo mount /dev/mapper/VolumeGroup-LogicalVolume

That's the short version.

As you can surely see, there are many different ways of automating this.  A simple udev rule and a shell script could be used to prompt you, when external backup media is attached, whether you want to back up current Logical Volumes to that media.  For restoring, you'll want a command-line command (it's useful to add status=progress to the dd commands in that case), since you might be in an emergency command line, and in any case you'll want to unmount the Logical Volume (making GUI use, uh, difficult – workarounds exist, of course) for the restoration.

As to why I don't think this is a workable pattern for workstations: in my opinion, workstations are volatile and modular.  A monolithic backup is too rigid.

If you partition your system so that /home is a separate Logical Volume, then image-based backups become viable.  However, even then, I believe that instead of dd'ing the image itself, we should use tar (with or without compression, possibly niced and/or ioniced) to compress the filesystem contents to the external media.  That way, the external media can contain multiple backups.  With an index file (text file describing the file list inside the tar archive), one can trivially restore individual files or subdirectories, which can come in handy.

If the OS is on a separate Logical Volume or Volumes, then taking a snapshot after a clean install makes a lot of sense.  When tar'ing the user files, you can also save the currently installed package list, output of
    LANG=C LC_ALL=C dpkg -l | awk '$1 == "ii" { print $2, $3 }' > packages.list
so you can quickly upgrade an older image to newer by installing the difference in the packages – or if you get some sort of conflicts after installing a package, and purging the package doesn't fix it, you can compare incidental changes in the packages/versions.
Let's say packages.list contains the desired package list.  Then, running
    LANG=C LC_ALL=C dpkg -l | awk -v src=packages.list 'BEGIN { while (getline) { if ($1=="ii") pkg[$2] = $3 } while (getline < src) { if ($1 in pkg) { if (pkg[$1] != $2) printf "Update: %s from %s to %s\n", $1, pkg[$1], $2 } else { printf "Install: %s %s\n", $1, $2 } } }'
gives you a summary of which packages need updating/downgrading and which packages need installing.
This, of course, could be automated in the backup user interface.
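The "Install:" lines from that summary could then be fed to the package manager; a hypothetical sketch (the summary contents here are made up, and the actual apt-get call is left commented out):

```shell
# Hypothetical summary, in the format produced by the comparison above
summary=$(mktemp)
cat > "$summary" <<'EOF'
Update: bash from 5.0-4 to 5.1-2
Install: zstd 1.4.8
Install: lz4 1.9.3
EOF
# Collect the names of packages that need installing
pkgs=$(awk '$1 == "Install:" { print $2 }' "$summary")
echo $pkgs                        # prints: zstd lz4
# sudo apt-get install $pkgs      # would perform the actual installation
```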

A simple GUI utility would make a lot of assumptions on how the system is partitioned, that this is essentially a single-human-user system, and so on.  Not a problem if you write your own GUI, but a problem when considering a more general-purpose backup tool.
« Last Edit: December 06, 2020, 05:18:00 pm by Nominal Animal »
 

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1209
  • Country: pl
Re: DOS vs. Linux
« Reply #113 on: December 06, 2020, 08:10:30 pm »
Try lz4 if you want a decent bit of compression and speed as well.
There is also zstd. libarchive supports zstd since version 3.4.3, and GNU tar has zstd support since 1.31.
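For instance (a sketch; gzip is shown because it is universally available, and the newer compressors drop in the same way, assuming your tar is new enough):

```shell
# gzip shown here; with GNU tar >= 1.31 you could use --zstd instead,
# or -I lz4 on any tar that supports the -I option.
dir=$(mktemp -d)
echo payload > "$dir/file.txt"
tar -C "$dir" -czf "$dir/backup.tar.gz" file.txt
# tar -C "$dir" --zstd -cf "$dir/backup.tar.zst" file.txt   # zstd variant
tar -tzf "$dir/backup.tar.gz"     # prints: file.txt
```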

Python is great; for scripting I am also going to learn Ruby, so I hope in 2021 I will know enough about 3 scripting languages, in this order:
Worth noting that they are not equivalent in terms of what they can be used for. E.g. bash is a shell and while it has great scripting capabilities, don’t get carried away while using it. Just because you can doesn’t mean you should. :D

People create amazing pieces of art like bashtop, but just like a 3D shooter written in gawk they should remain in the domain of art. ;)
People imagine AI as T1000. What we got so far is glorified T9.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #114 on: December 06, 2020, 09:49:12 pm »
Try lz4 if you want a decent bit of compression and speed as well.
There is also zstd. libarchive version suports zstd since 3.4.3, GNU tar has zstd since 1.31.
I'm still running tar 1.29, and didn't think of checking the upstream man 1 tar manpage. Oops... :-[

Worth noting that [those scripting languages] are not equivalent in terms of what they can be used for. E.g. bash is a shell and while it has great scripting capabilities, don’t get carried away while using it. Just because you can doesn’t mean you should. :D
Fully agreed.

For the topic at hand, here are my typical use cases for different scripting languages:
  • bash: Shell scripts, automation.
    In rare cases, I use zenity to provide prompts or progress bars.  More commonly, just notify-send to inform of task completion.
  • python3: Graphical user interfaces, format conversion.
    Python has very nice built-in libraries to handle e.g. CSV format, plus lots of modules for e.g. spreadsheet formats.
  • awk: Text-format data collation and summarizing.
    Awk is designed for record and field-based data, and has fast associative arrays, and the syntax is relatively simple.
    A good example of awk snippets I use is the snippet a couple of messages earlier that summarizes changes needed from currently installed Debian packages to a stored package list.
    I also have a script named c written in bash and awk, for quick numerical calculations on the command line; like bc, but simpler.
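A one-liner in the same spirit (a guess at the idea only; the author's actual "c" script is not shown in the thread):

```shell
# Hypothetical mini-calculator: pass an arithmetic expression to awk's BEGIN block
c() { awk "BEGIN { print ($*) }" ; }

c '2*3 + 1'      # prints 7
c 'sqrt(2)'      # square roots and other awk math functions work too
```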
For server-side scripts to be used on web pages, I've used both PHP and Python.  I've written more stuff in PHP over the decades, but I do like Python more.
I have also written security-sensitive web scripting stuff in C, but only when required for proper privilege separation.



There are two patterns I'd like to see used more when doing shell scripting.

First is automagically removed temporary work directories:
Code: [Select]
#!/bin/bash
Work="$(mktemp -d)" || exit 1
trap "cd /tmp ; rm -rf '$Work'" EXIT

# Put temporary files in "$Work"
The exit trap means the temporary directory is removed whenever the script exits, even if interrupted by Ctrl+C.  The path to the work directory, $Work, is evaluated when the trap is set, so even if you change "$Work" afterwards, the original temporary directory will be removed.  The temporary directory will be under /tmp, or under whatever directory is set in the environment variable $TMPDIR.  The reason for cd /tmp is that if the current working directory is inside the temporary directory, deleting the tree could fail: changing to any directory known to be outside it works here, and since the working directory is irrelevant when the script exits, it is completely safe to do.  I like to use cd /tmp to ease my OCD/paranoia; it's a relatively harmless place to run rm -rf in.

The second is passing paths safely from a find operation.
To execute a command for each file in the current directory and any subdirectories:
Code: [Select]
find . -type f -exec command '{}' ';'
The single quotes ensure the special parameters, {} for the file name, and ; for the end of the command, are passed as parameters to find, and not interpreted by the shell as special tokens.
To run a command allowing multiple file names per command:
Code: [Select]
find . -type f -print0 | xargs -r0 command
To pass the file names to Awk, use
Code: [Select]
find . -type f -print0 | awk 'BEGIN { RS="\0" ; FS="\0" } { path = $0 }'   # use path inside the action
To pass the file names to a Bash while loop, use
Code: [Select]
IFS=$'\0'
find . -type f -print0 | while read -rd $'\0' filepath ; do
    # use "$filepath"
done
or, if you don't mind running the while loop in a subshell,
Code: [Select]
find . -type f -print0 | ( IFS=$'\0' ; while read -rd $'\0' filepath ; do
    # use "$filepath"
done )
The latter is preferred, since IFS is the Bash variable that controls word splitting.  Running the while loop in a subshell means word splitting in the parent shell is unaffected.

All these use ASCII NUL (zero byte) as the separator, and work without issues with all possible file names in Linux.
In Linux, file and directory names are opaque byte strings where only NUL ('\0', zero) and / ('/', 47) are not allowed.  You can have a file whose name is a newline, for example: touch $'\n' in Bash.

Many new script-writers assume whitespace does not occur in filenames.  That is fixed by putting the file name reference in double quotes.  But what about file names that have a newline or some other control character in them, including ASCII BEL?  Many Perl utilities fail with those, for basically no reason, causing all sorts of annoyances when you encounter filenames with odd characters in them.
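A quick way to convince yourself that the NUL-separated pipelines above survive such names (a throwaway demo; GNU grep's -z option is assumed):

```shell
# Create a file whose name contains an embedded newline, plus a normal one
d=$(mktemp -d) ; cd "$d"
touch "$(printf 'bad\nname')" plain
# grep -zc counts NUL-terminated records, so the embedded newline is harmless:
find . -type f -print0 | grep -zc .     # prints 2 – both files are seen
```

A naive `find . -type f | wc -l` would count three "lines" here, silently miscounting.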

I do like to additionally use export LANG=C LC_ALL=C in my shell scripts parsing the output of utility commands, because the locale can affect their output.  The C locale is the untranslated, raw locale.

To pass the file names with last modification date and time, and size in bytes, to a Bash script, I use
Code: [Select]
find . -type f -printf '%TY-%Tm-%Td %TT %s %p\0' | ( IFS=$'\0' ; while read -rd $'\0' filepath ; do
    filedate="${filepath%% *}" ; filepath="${filepath#* }"
    filetime="${filepath%% *}" ; filepath="${filepath#* }"
    filesize="${filepath%% *}" ; filepath="${filepath#* }"
    # use $filedate, $filetime, $filesize, and "$filepath"
done )
To an awk script,
Code: [Select]
find . -type f -printf '%TY-%Tm-%Td %TT %s %p\0' |  awk 'BEGIN { RS="\0" ; FS=" " }
      { filedate=$1 ; filetime=$2 ; filesize=$3 ; filepath=$0 ; sub(/^[^ ]* [^ ]* [^ ]* /, "", filepath)
        # use filedate, filetime, filesize, and filepath
      }'

Note: I do not expect a beginner to know the above.  I only wish they were more easily found, amid all the crap, when a beginner decides they want to do something like the above.  The only "trick" here is in the last awk snippet, using a regular expression to remove the other fields from the path using the entire record ($0 in awk), but using the standard field separation for the normal fields.  And perhaps the Bash and POSIX shell syntax for manipulating strings (${variable#pattern}, ${variable##pattern}, ${variable%pattern}, and ${variable%%pattern}).
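To see those prefix/suffix operators in action on a single record of the form produced above (the record contents here are made up for illustration):

```shell
# One find -printf style record: date, time, size, then a path with spaces
record='2020-12-06 21:49:12 1234 ./some dir/file.txt'
filedate=${record%% *} ; record=${record#* }   # strip first field off the front
filetime=${record%% *} ; record=${record#* }
filesize=${record%% *} ; record=${record#* }   # what remains is the full path
printf '%s|%s|%s|%s\n' "$filedate" "$filetime" "$filesize" "$record"
# prints: 2020-12-06|21:49:12|1234|./some dir/file.txt
```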
For lack of a better place to put these examples, I posted them here.
« Last Edit: December 06, 2020, 09:53:42 pm by Nominal Animal »
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #115 on: December 06, 2020, 10:28:09 pm »
I have different sorts of backups for explicit data, but what I am specifically interested in implementing (well, having some app implement) is efficient disaster recovery. Whilst I accept that this method gets your data back, it's  not quick and it's just your data. It's not really suitable for bare metal restore.

What is the disaster you wish to recover from, and how do you use your machine overall?

Good question! And the answer is... I don't know. There is the obvious catastrophic disk loss (sadly, now more likely without warning than it used to be). However, a few times recently I've elected to recover from some situation (crap install of something, perhaps) by just restoring the system drive. Also a few times recently, recovering single files that I've accidentally, er, mislaid (in fact, I did this yesterday).

I think tailoring to a specific type of disaster might well be asking for trouble. A disk image covers pretty much any restore requirement, from 'oops, didn't mean to press that key' through to infrastructure flattening. Worst case, where even replacement hardware isn't available, booting it into a VM is pretty simple.

Note that my backups are disk images as a file. I've done the tapes and disk clones and all that stuff. Wouldn't consider any of them appropriate.

Not sure what you mean about using the machine overall. It is my primary machine so I do most things on it.

Quote
Me, I don't want to do bare metal backups on my workstation, because I know I can install any Linux distribution in twenty-thirty minutes, and because of the rate of development, I know I'll want to switch distributions in the future.

I don't want to do any restore at all either! Nevertheless, in 20-30 minutes I can be recovered from a disaster (accepting the hopefully lesser disaster of missing the data since the last backup) and be on my way again. You've essentially got to a position where you can start dropping your data onto your new OS - I accept that it would take me longer to go that route (which is one reason I wouldn't consider it on Windows), but there is a deeper thing too: I have maybe 200-300 apps installed and each is likely set up in a way that suits me.

When I get a new machine I tend to start again from scratch: install a new copy of the OS, install each app only as I need it. Nine months to a year down the line I am still probably dicking around setting up things the way I want, and referring to the backup of the old system to recover various settings and data. You can see the attraction of slapping in a disk and coming back a half-hour later to a system that doesn't need any of that.

Quote
If you need a High Availability workstation, you're better off having two, syncing the active one to the backup one.

That's a thought, but not appropriate - it's basically RAID on a machine level, so I'd still need backups.

Quote
You must understand that I am not denigrating your needs.  Quite the opposite, I applaud everyone who thinks about backups and how they utilize them.

I understand that and appreciate the effort.

Quote
you are fixated on a backup workflow that works for you in Windows, and are insisting something similar to be used in Linux

Yes, entirely likely. However, it's something that's evolved over many years, if not decades, and which I've found to work reliably. Since no-one using Linux seems to do the same, there is clearly something I am missing. Nevertheless, I note that the typical last resort backup option is similar to what I do - look in the RPi forum, for instance, and backing up a system there is basically making a copy of the SDcard.
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #116 on: December 06, 2020, 10:39:00 pm »
Quote
you are fixated on a backup workflow that works for you in Windows

Actually, yes, that is a big influence. Essentially because you have to do an image if you want to back up the system drive and end up with a bootable recovery (file-and-folder isn't good enough). So in that sense, Windows demands an image, at least for that drive.

However, I think that just points to the most recoverable technique. Linux may not have such a requirement, but if all else fails then a disk image is as good as you could ever get. Starting from that point gets you to wherever you need to go, whereas starting from a more 'reasonable' point might not.

Another way to look at it is that an image is an offline (in the disk sense) RAID copy. You're just syncing the array periodically instead of in real time.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #117 on: December 07, 2020, 01:48:44 am »
php

I am learning PHP mostly because it can be invoked by web servers like Apache; I find that PHP is a very powerful language that can also be used for scripting, but I haven't yet understood how to properly debug sources.

How do you manage it?

Ruby also looks interesting and powerful, and seems easier to debug. But I am not sure about that; it's just my first impression based on some simple tutorials.
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #118 on: December 07, 2020, 02:38:51 am »
How do you [debug PHP]?
You do not.  (No, I'm not kidding.)

PHP is a very dangerous language.  It used to be even more dangerous, what with Magic Quotes being enabled by default (but fortunately completely removed in 5.4.0, released in 2012).

You write the code in small sections, unit testing each part.  Then you incrementally combine the parts, redoing the unit tests, and also testing the combined code.  I use highly aggressive test inputs (containing embedded zeros, quotes, newlines of different formats, invalid UTF-8 sequences) for all form-processing code; basically stuff that not even skript kiddies have thought of yet.

Validating and vetting existing PHP code is hellish.  You essentially have to rip it apart, and rewrite the functionality yourself, then compare the results to the original code, to be sure your understanding, the original developers' understanding, and the code all match.

A large part of the problem is that PHP has several ways of doing the same thing.  It has objects and methods, but also a huge library of non-OO functions by default.  Many PHP programmers mix these.  Unless you have the same background as the original developer(s), you'll be constantly wondering why they did stuff that way, when this way would have been shorter/easier/more robust.  That sort of thing gets on one's nerves, especially if you have an OCD/paranoid streak, which in my opinion is a prerequisite for anyone vetting security-sensitive code.  (And all code handling user passwords or personal information is security-sensitive to me.)

When you discover the PHP command-line interpreter, you might think that hey, I can run my code under that, and gdb the process, with Python accessors to make sense of the PHP data structures and such, which would work, except that the command-line interpreter is not the same environment as the Apache module or the FastCGI interpreter.  In Debian derivatives, they're even provided by separate packages, which means they may not even be the exact same version.
 
The following users thanked this post: DiTBho

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #119 on: December 07, 2020, 04:00:26 am »
Ruby also looks interesting and powerful, and seems easier to debug.

I like Ruby much more than Python. It feels as if it was actually designed. Also, it has many of the goodies from Perl, but in a language with proper data structures and functions that is usable for programs longer than 10 lines.

The Python implementation is a bit faster than Ruby, but if that matters then you should probably be using C or Swift or Go or Rust anyway.
 
The following users thanked this post: DiTBho

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #120 on: December 07, 2020, 07:20:21 am »
There is still room for a new, efficient scripting language out there, that avoids Python's pitfalls, has proper thread support, but similar C/C++ interfacing capability from the get go; and if possible, as easy to embed as Lua.  As Python has shown, a comprehensive standard library is a necessity.

There are all these "new and advanced" programming languages implementing complex new abstractions and touting new computing science innovations, but nobody is willing to do the real grunt work: find out what actually works in existing languages, what makes them powerful or weak, and look at existing codebases to see what sort of features are used; then take all the good things and avoid the bad ones, to design something provably better than existing languages.  I mean brute-force objective analysis, forgetting about ones own preferences. It's nothing glamorous, just hard grunt work and lots of it, but the results would beat the new-language-du-jour by a mile, methinks.

We don't seem to appreciate that sort of design-to-last mentality anymore :'(.
« Last Edit: December 07, 2020, 07:21:56 am by Nominal Animal »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #121 on: December 07, 2020, 11:05:05 am »
People *have* designed such languages, but the problem is they don't get any traction unless they find some "killer app", in which case the shittiest language in the world (PHP, JavaScript...) can take off.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #122 on: December 07, 2020, 11:18:05 am »
Yesterday I found a very nice book, available as PDF, as epub for e-readers, and as mobi for Kindle readers.
It's priced $18.50 (USD) and can be ordered here
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: DOS vs. Linux
« Reply #123 on: December 07, 2020, 11:58:38 am »
There is still room for a new, efficient scripting language out there, that avoids Python's pitfalls, has proper thread support, but similar C/C++ interfacing capability from the get go; and if possible, as easy to embed as Lua.  As Python has shown, a comprehensive standard library is a necessity.
(.)

Actually, such a language already exists.

It has been powering things under the hood for decades.

Incredibly powerful and lightweight – properly written by people who
actually do things properly.. and not driven by hidden interests..

Such a language is frequently mocked, like all proper *NIX environs,
with those funky labels... mostly promoting "new", crappy,
untested solutions..

Such a language is called PERL.

Paul
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #124 on: December 07, 2020, 12:46:10 pm »


What I need:
  • Automation Tasks
  • Web programming

Umm, Perl? I think I will add it to my list  :D
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: DOS vs. Linux
« Reply #125 on: December 07, 2020, 01:40:09 pm »


What I need:
  • Automation Tasks
  • Web programming

Umm, Perl? I think I will add it to my list  :D

Rest assured that PERL fits 99% of daily tasks, and

EVERY SINGLE RESPONSIBLE SYS ADMIN ...
sleeps with a Perl cookbook under the pillow..

PERL will suit you 100% fine, from simple scripts
to complex object-oriented GUI apps.

Check the latest CPAN stats to see how many folks have actually
been working on it daily, restlessly, since... hmm, the 80s? 90s?

While the other side of the coin has done the impossible
to wipe out PERL's strong capabilities from the earth...

It is still there..

That said, I personally don't like the recent Perl 6
trends... diverting from the real strong and secure paradigms.

There is no coming back once you master PERL at least reasonably.

Enjoy the real strong points of the free Internet...
Paul


PS> Several times the question of REPLACING BASH w/PERL
has crossed my eyes... and I do think that would be a nice thing...
« Last Edit: December 07, 2020, 01:42:21 pm by PKTKS »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #126 on: December 07, 2020, 05:00:47 pm »
People *have* designed such languages, but the problem is they don't get any traction unless they find some "killer app", in which case the shittiest language in the world (PHP, JavaScript...) can take off.
I haven't seen any.  Every one I've seen has either historical baggage, or serious signs of this-is-my-favourite-fad-and-I'll-keep-it-in-itis.

Rest assure that PERL  fits 99% of day tasks
Ha-ha, that's funny. :palm:

Perl is Yet Another language that has not evolved but aggregated more and more.  Having both OO and non-OO approaches in the standard library, and an obtuse syntax to boot, is a long-term maintenance problem.  Have you ever looked at long-maintained Perl code?  It is almost as bad as PHP.

And like I said earlier, I find it a fatal flaw that perl-based "standard" tools (like /usr/bin/rename provided by File::Rename) cannot handle file names with newlines (provided from standard input).

You may like it, but I don't.  I've been bitten by its flaws too many times.  Also, I still have flashbacks to some really horrible Perl code I had to try and maintain on a server just before the turn of the century.  It has its uses, but those are very rare; I keep it in the same category as awk.
 
The following users thanked this post: DiTBho

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #127 on: December 07, 2020, 10:48:13 pm »
Quote
cannot handle file names with newlines

I never realised that anything could, or should! Is there an actual OS that does?
 
The following users thanked this post: Ed.Kloonk

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #128 on: December 07, 2020, 11:07:12 pm »
You can create a file named foo<newline>bar in Bash by running touch $'foo\nbar'.

Like I wrote above, the Linux kernel considers file names (directory entries) to be opaque byte sequences terminated with '\0' and not containing '/'.
The $'foo\nbar' is just a way Bash allows one to express the name.

You can also do
Code: [Select]
filename="$( printf 'foo\nbar' )"
touch "$filename"
or even
Code: [Select]
touch 'foo
bar'
to accomplish the same.  Works just fine in Linux.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4036
  • Country: nz
Re: DOS vs. Linux
« Reply #129 on: December 07, 2020, 11:23:31 pm »
Quote
cannot handle file names with newlines

I never realised that anything could, or should! Is there an actual OS that does?

Any decent one, I should think!

Code: [Select]
bruce@rip:~/temp$ l
total 32
-rwxr-xr-x  1 bruce bruce 16408 Dec  6 16:01 hello
drwxr-xr-x  6 bruce bruce  4096 Dec  6 15:58 _install
drwxr-xr-x 17 bruce bruce  4096 Dec  6 15:28 riscv-isa-sim
drwxr-xr-x 11 bruce bruce  4096 Dec  6 15:37 riscv-pk
bruce@rip:~/temp$ echo foo >'line 1\
> line 2'
bruce@rip:~/temp$ l
total 36
-rwxr-xr-x  1 bruce bruce 16408 Dec  6 16:01  hello
drwxr-xr-x  6 bruce bruce  4096 Dec  6 15:58  _install
-rw-r--r--  1 bruce bruce     4 Dec  8 11:56 'line 1\'$'\n''line 2'
drwxr-xr-x 17 bruce bruce  4096 Dec  6 15:28  riscv-isa-sim
drwxr-xr-x 11 bruce bruce  4096 Dec  6 15:37  riscv-pk
bruce@rip:~/temp$

And viewed in emacs diredit mode:

Code: [Select]
  /home/bruce/temp:                                                                                           
  total used in directory 44 available 368830636                                                             
  -rwxr-xr-x  1 bruce bruce 16408 Dec  6 16:01 hello                                                         
  drwxr-xr-x  6 bruce bruce  4096 Dec  6 15:58 _install                                                       
  -rw-r--r--  1 bruce bruce     4 Dec  8 11:56 line 1\\                                                       
line 2                                                                                                       
  drwxr-xr-x 17 bruce bruce  4096 Dec  6 15:28 riscv-isa-sim                                                 
  drwxr-xr-x 11 bruce bruce  4096 Dec  6 15:37 riscv-pk                                                       


OK, that was Ubuntu. Let's try MacOS:

Code: [Select]
bruce@mini:~/trv$ l
total 168
-rw-r--r--  1 bruce  staff   2703  6 Dec 23:17 instructions.inc
-rwxr-xr-x  1 bruce  staff  68112  6 Dec 23:37 trv
-rw-r--r--  1 bruce  staff   8340  6 Dec 23:37 trv.c
bruce@mini:~/trv$ echo foo >'line1\
> line2'
bruce@mini:~/trv$ l
total 176
-rw-r--r--  1 bruce  staff   2703  6 Dec 23:17 instructions.inc
-rw-r--r--  1 bruce  staff      4  8 Dec 12:01 line1\?line2
-rwxr-xr-x  1 bruce  staff  68112  6 Dec 23:37 trv
-rw-r--r--  1 bruce  staff   8340  6 Dec 23:37 trv.c
bruce@mini:~/trv$

And, again, viewed in emacs diredit:

Code: [Select]
  /Users/bruce/trv:                                                                                         
  total used in directory 176 available 103 GiB                                                             
  drwxr-xr-x   7 bruce  staff    224  8 Dec 12:01 .                                                         
  drwxr-xr-x+ 41 bruce  staff   1312  6 Dec 23:21 ..                                                         
  drwxr-xr-x  12 bruce  staff    384  6 Dec 23:47 .git                                                       
  -rw-r--r--   1 bruce  staff   2703  6 Dec 23:17 instructions.inc                                           
  -rw-r--r--   1 bruce  staff      4  8 Dec 12:01 line1\\                                                   
  line2                                                                                                     
  -rwxr-xr-x   1 bruce  staff  68112  6 Dec 23:37 trv                                                       
  -rw-r--r--   1 bruce  staff   8340  6 Dec 23:37 trv.c                                                     

Running the following trivial program:

Code: [Select]
#include <stdio.h>
#include <dirent.h>

void main() {
    DIR *dp = opendir(".");
    struct dirent *files;
    while(files = readdir(dp))
      printf("Found file: '%s'\n", files->d_name);
}

On Linux:

Code: [Select]
bruce@rip:~/temp$ ./dir
Found file: '_install'
Found file: '..'
Found file: 'riscv-pk'
Found file: 'riscv-isa-sim'
Found file: 'hello'
Found file: '.'
Found file: 'dir.c'
Found file: 'line 1\
line2'
Found file: 'dir'
bruce@rip:~/temp$

On MacOS:

Code: [Select]
bruce@mini:~/trv$ ./dir
Found file: '.'
Found file: '..'
Found file: 'instructions.inc'
Found file: 'dir.c'
Found file: 'line1\
line2'
Found file: 'trv'
Found file: 'trv.c'
Found file: 'dir'
Found file: '.git'
bruce@mini:~/trv$

So it seems the '\' got included in the real file name as well as the newline. So it probably wasn't necessary. But anyway, both Linux and MacOS happily allow you to create files with newlines in their names.
 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #130 on: December 08, 2020, 04:52:14 am »
Well, you live and learn! Thanks, both  :-+
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 6779
  • Country: pl
Re: DOS vs. Linux
« Reply #131 on: December 08, 2020, 06:10:51 am »
Code: [Select]
-rw-r--r--  1 bruce bruce     4 Dec  8 11:56 'line 1\'$'\n''line 2'
GNU ls :--
Who even thought it's a good idea :rant:

Code: [Select]
# grep QUOTING_STYLE .bashrc
export QUOTING_STYLE=literal

So it seems the '\' got included in the real file name as well as the newline. So it probably wasn't necessary.
Indeed. Immediately wanted to point it out.
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: DOS vs. Linux
« Reply #132 on: December 08, 2020, 01:56:49 pm »
(..)
So it seems the '\' got included in the real file name as well as the newline. So it probably wasn't necessary.


Well, I will avoid deviating from the point more than strictly needed.

I have dealt with programming long enough to say certain things
with pretty much certainty:
- I am above average in Perl skills, but I am not even close to the experts
- It took me quite a while to master the necessary mid-level skills
- quite a while being: I have dealt with PERL since the early 90s.

The comments above actually concern the 0.1% case of EXOTIC and
PATHOLOGICAL idiosyncrasies.

I treat them as gross errors (inserting newlines into file system names)
and wipe them out easily, along with a bunch of other cross-coded chars that
cannot be mapped between different systems.

Trivial fix for such an error:  s/\n//g;   # done
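The same "fix" can be expressed in plain shell, for comparison; a sketch only, and whether silently renaming such files is the right response is of course debatable:

```shell
# Strip embedded newlines from file names in the current directory.
d=$(mktemp -d) ; cd "$d"
touch "$(printf 'foo\nbar')"              # a deliberately hostile name
for f in * ; do
    clean=$(printf '%s' "$f" | tr -d '\n')
    [ "$f" = "$clean" ] || mv "$f" "$clean"
done
ls                                        # prints: foobar
```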

PERL is a context-sensitive logical entity parser...
it has embedded predefined types for entity record definitions.

It is not so easy when database feeds (and PERL handles pretty much
all of them) contain nasty currency chars combined with unmapped
UTF-8 ..   a hell of a headache ...

I have an entire library with hundreds of such exotic
fixes for data which are not directly feedable into a db stream...

As anyone will recall, data type mismatches will reject them all.
Please do not allow gross errors like that to cloud your judgment.

The characters in question are not mapped here inside the
text.. and neither in several other systems.

So a figure of that stupid but highly frequent problem is attached.
I have more than a dozen hundred lines of fixes...

Paul

« Last Edit: December 08, 2020, 02:19:24 pm by PKTKS »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #133 on: December 08, 2020, 02:36:51 pm »
GNU ls :--
Who even thought it's a good idea :rant:
But that's the entire reason for my (very slight) frustration in this thread.

There are two approaches.
1. Allow only that which is necessary or obviously useful.
2. Deny only that which is harmful or problematic.

These are completely separate paradigms.
First one is important in security-sensitive contexts.  Second one is important if you want to maximize the options for the end user.
Getting people to understand that while neither approach is superior, they are completely different, and realizing exactly how they differ, and what the ensuing effects of applying one or the other, is the frustrating bit.

- quite a while being: I deal with PERL since early 90s.
Me too.  But, I've never understood you one-tool people; I prefer to use the tool that fits my workflow, instead of venerating one above all others.
(For me, that applies to everything: software, programming language, OS, even license used for my own work product.)

Which is basically what I was saying about scripting languages, too.  All existing ones I know have obvious flaws, and instead of doing the hard work to integrate the good parts into a new language, all I see in new languages is new abstractions and computer science hip topic du jour being applied.  (I do tend to spend some time reading publications at ACM.org, I'm not bashing CS in general.  And I do consider software engineering to be more important than CS; I guess I see a lot of CS in new programming languages, but very little SE.)

Such a combination would obviously not replace all other scripting languages, but for any of the cases where I use scripting languages now, it would surely be an improvement.
 
The following users thanked this post: DiTBho

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #134 on: December 08, 2020, 02:44:57 pm »
Quote
Trivial fix for such error:  s/\n//g;   # done

Such trivial search-and-replace efforts tend to find and replace stuff you didn't want changed. It really needs a context-sensitive search, which I don't believe 's' is. Maybe drop the 'g' so you can check each instance, eh?

Good job your perl skills are better than this trivial fix ;)
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: DOS vs. Linux
« Reply #135 on: December 08, 2020, 02:59:27 pm »
Quote
Trivial fix for such error:  s/\n//g;   # done

Such trivial search-and-replace efforts tend to find and replace stuff you didn't want changed. It really needs a context-sensitive search, which I don't believe 's' is. Maybe drop the 'g' so you can check each instance, eh?

Good job your perl skills are better than this trivial fix ;)


My dear... I will reply for the sake of completeness...

Of course, as said above.. it depends on what context we
are dealing with..

For such stupid errors,
$/ = ""  behaves much like $/ = "\n\n"

and when $/ is assigned the empty string, Perl
puts itself into paragraph read mode (runs of blank lines count as a single separator).

It depends a lot on your goals..
for that stupid newline inside file names, the trivial trick suffices;
alternatively you *may* use paragraph mode.
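Awk, for what it's worth, uses the same paragraph-mode convention (RS set to the empty string); here is a tiny self-contained demo with made-up input:

```shell
# Two 'paragraphs' separated by a blank line; with RS="" awk treats
# each paragraph as one record, analogous to Perl's $/ = "".
printf 'one\ntwo\n\nthree\n' \
  | awk 'BEGIN { RS="" } { print "record", NR, "fields", NF }'
# record 1 fields 2
# record 2 fields 1
```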

Or more complex targets as PERL does have more ..
if you prefer
@lines = split( /\W*\n+\W*/, $_ );
@words = split( /\W*\s+\W*/, $_ );
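And for contexts where Perl isn't at hand, the same newline-stripping as s/\n//g can be sketched in plain shell with tr (the broken filename below is made up):

```shell
# Hypothetical example: a filename with an embedded newline,
# cleaned up the same way as the Perl one-liner s/\n//g.
name=$(printf 'bad\nname.txt')            # simulated broken filename
fixed=$(printf '%s' "$name" | tr -d '\n')
printf '%s\n' "$fixed"                    # -> badname.txt
```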


Paul
« Last Edit: December 08, 2020, 03:01:42 pm by PKTKS »
 

Offline bostonmanTopic starter

  • Super Contributor
  • ***
  • Posts: 1790
  • Country: us
Re: DOS vs. Linux
« Reply #136 on: December 09, 2020, 01:45:41 am »
Here is an example of not understanding what is going on.

I ran the Ubuntu updates earlier and got these 'error' messages. The final message stated the updates have been installed, but why did it have errors during the process?
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: DOS vs. Linux
« Reply #137 on: December 09, 2020, 10:56:39 am »
Here is an example of not understanding what is going on.

I ran the Ubuntu updates earlier and got these 'error' messages. The final message stated the updates have been installed, but why did it have errors during the process?

The *BUNTU things are made to lift absolute newbies
through a few steps of the learning curve...

But definitely, after that, any reasonably clever newbie
should ditch those automated crappy scripts in favour
of continuing to climb the proper learning curve..

otherwise they will be stuck in: "see ma... just like Windows does.."

Libmikmod contains a config parser which is generating
that "warning" about some malformed lines introduced in the files
listed by the messages...

I manage my scratch system totally free of buntus and systemd
and crappy shit.. using a SIMPLE PACKAGE MANAGER written in PERL..

Ohhh really? Yes, simple as it should be, 100% working, integrated
with ncurses, Qt and Gtk (thanks to PERL)

Here is a shot of mikmod using the ncurses front end.. in PERL

Your system should contain such items and a man page for that

Paul
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: DOS vs. Linux
« Reply #138 on: December 09, 2020, 03:09:55 pm »
oops.. fix the mismatched package name

This is the KMOD library, not MIKMOD

The error is most likely some sort of "blacklisted item"

which is a very common method in Debian, but not in SysV,
where you only do things when you are 100% sure..

KMOD is not MIKMOD, sorry

Paul
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #139 on: December 09, 2020, 04:25:38 pm »
I ran the Ubuntu updates earlier and got these 'error' messages. The final message stated the updates have been installed, but why did it have errors during the process?
The way to understanding starts from understanding how error messages are formed.  Here is the first line of the error, or at least the part that I can see from the image (the really interesting part was cut off):

libkmod: ERROR ../libkmod/libkmod-config.c:656 kmod_config_parse:  ... .d/blacklist-ideapad.conf line 6: ignoring bad line starting with ...

The first part in green tells us what process or service is reporting.  Here, it is libkmod, a C interface for applications to load and unload Linux kernel modules. (Why have that? I don't have that installed!  Applications should not be able to do that, only specialized, privileged user interfaces on behalf of the administrator(s), and those use modprobe/insmod/rmmod like they're supposed to.)

The rest of the line is the actual error message.  The black part tells us it was produced by libkmod/libkmod-config.c line 656, but this is kinda odd; usually only asserts (which, if they fail, indicate a programming error) do this.  The blue and red parts are the more commonly seen parts, describing where the error occurred, and what the error is, respectively.
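As a throwaway illustration of that anatomy, here is the sample line (reconstructed from the post, with the elided parts shortened, so treat it as an approximation) pulled apart with standard shell tools:

```shell
# Sample message, reconstructed from the post (details partly assumed):
msg='libkmod: ERROR ../libkmod/libkmod-config.c:656 kmod_config_parse: blacklist-ideapad.conf line 6: ignoring bad line'

echo "$msg" | cut -d' ' -f1               # reporting component -> libkmod:
echo "$msg" | grep -o 'line [0-9][0-9]*'  # location in the file -> line 6
```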

Because part of the error was cut off, I cannot be sure, but I'd wager the missing blue part is /etc/modprobe.d/blacklist-ideapad.conf , which is a modprobe configuration file controlling how some kernel modules are loaded (or not) regarding ideapad devices.  Looking at the libkmod/libkmod-config.c:kmod_config_parse() function, I'd wager the problem is that the modules blacklisted by that file do not contain any underscores.  (Because libkmod considers blacklisting drivers – using blacklist drivername – an error.  Which is obviously incorrect, but hey, that code is utter crap anyway.)

So, let's recap.  During installation, either you installed libkmod directly, or some package you installed pulled it in.  During some trigger (pre- or post-installation, when package install scripts can run things to verify everything is ok, or to set up the package itself), libkmod complained that it had to skip some configuration entries in /etc/modprobe.d/blacklist-ideapad.conf because it could not parse them correctly.

How dangerous is this?  Not at all.  It does mean that libkmod is not usable with kernel modules related to ideapad devices, because it cannot parse the modprobe configuration file.

What should you do about it?  Remove (purge!) libkmod, because it is crap.  Whatever applications need it are crap also.  modprobe is provided by kmod, the official tool package in Debian derivatives for managing Linux kernel modules.  The libkmod you have installed is having trouble with its configuration files, and duplicates that functionality, so the problem is solely in libkmod.  The best approach to resolving such things is removing the inferior software and all its dependents (libkmod, and all packages depending on it), and adding it and its developers to your shitlist.  Whenever you see new stuff, you check your shitlist first, because bad programmers who leave such errors languishing for months with zero action even in the git repository will never improve.

Others are much more gentle with their approach, and will not maintain such a shitlist.  I do, because I refuse to waste time and effort on the work product of those who are happy to waste others' time and effort.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #140 on: December 09, 2020, 05:07:18 pm »
:wtf: Aww fuck, Lucas De Marchi seems to be the current kmod maintainer, and has been adding libkmod to the 'kmod' package in more recent versions.

Just look at this bug fix for example: De Marchi caused a problem himself, that now can't be fixed because it is part of the userspace ABI.

(Edited: I had a suspicion as to why this interface was added, and looks like my suspicion was right.  Yes, systemd will apparently eventually do kernel module probing/loading for you.  I wonder how long it will be before they announce modprobe/insmod/rmmod are deprecated in favour of systemd-kmod or somesuch.)

Oh how I hate programmers who just keep aggregating things because they can, instead of taking care their work product is actually usable. :rant:

Bostonman, if you want to avoid that error in the future, you have to edit /etc/modprobe.d/blacklist-ideapad.conf so that 'libkmod' is happy with it.  Or you can ignore the error.  If you do not use a Lenovo Ideapad, you can even delete the file.
« Last Edit: December 09, 2020, 06:25:33 pm by Nominal Animal »
 

Online newbrain

  • Super Contributor
  • ***
  • Posts: 1719
  • Country: se
Re: DOS vs. Linux
« Reply #141 on: December 12, 2020, 03:52:21 pm »
Since I use Linux for 99% of everything, I just used it to check my most commonly used commands:

Code: [Select]
$ awk '{print $1}' ~/.bash_history | sort | uniq -c | sort -n -k1 -r

As I have duplicate removal turned on, this only counts the different variations of each command, rather than the number of times it was actually used.
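For anyone who wants to try the pipeline without touching their own history, here is the same idea run over a tiny fabricated history file (the entries are invented):

```shell
# Fabricated history fixture; in real use you'd read ~/.bash_history.
printf 'cd /tmp\nls -l\ncd ..\ngit status\ncd\n' > /tmp/hist_demo

# First word of each line, counted, most frequent first.
awk '{print $1}' /tmp/hist_demo | sort | uniq -c | sort -n -k1 -r
#   3 cd   (then ls and git with a count of 1 each)
```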

The (slightly redacted) results:

Code: [Select]
    659 cd
    600 git
    564 l
    489 sudo
    419 riscv64-unknown-elf-gcc
    388 ssh
    386 gcc
    344 man
    332 scp
    280 riscv64-unknown-elf-objdump
    246 cat
    242 rm
    219 less
    210 time
    192 arm-linux-gnueabihf-gcc
    159 clang
    144 find
    141 make
    136 mv
    120 echo
    113 mkdir
    104 emacs
     92 spike
     86 cal
     82 du
     81 objdump
     73 file
     72 ls
     70 cp
     69 top
     69 for
     65 arm-linux-gnueabihf-objdump
     63 pushd
     62 grep
     50 perl
     48 qemu-riscv64
     48 df
     41 which
     41 tar
     39 ifconfig
     38 riscv64-unknown-elf-ld
     37 riscv64-unknown-elf-as
     36 riscv64-unknown-elf-objcopy
     34 avr-gcc
     32 riscv32-unknown-elf-gcc
     31 qemu-riscv32
     30 llc
     30 ./configure
     28 export
     26 uptime
     26 cmake
     23 kill
     22 aarch64-linux-gnu-gcc
     21 seq
     21 dos2unix
     20 size
     20 /home/bruce/software/arduino-1.8.10/hardware/tools/avr/bin/avrdude
     19 ping
     18 cut
     18 avr-objdump
     16 tail
     16 python
     16 opt
     15 perf
     15 minicom
     15 lsblk
     14 uname
     14 popd
     14 aarch64-linux-gnu-objdump
     13 while
     12 zcat
     12 rmdir
     12 ln
     11 sort
     11 ../configure
     10 unzip
     10 mtr
     10 killall
     10 gunzip
     10 apt
      9 wc
      9 nslookup
      9 lsb_release
      9 chmod
      8 diff
      7 wget
      7 set
      7 pgrep
      7 awk
      7 apt-get
      6 sync
      6 asciidoc
      5 stty
      5 strings
      5 riscv64-unknown-linux-gnu-gcc
      5 curl
      5 crontab
      4 vi
      4 pwd
      4 lsusb
      4 date
      4 cmp
      3 unix2dos
      3 touch
      3 shasum
      3 screen
      2 su
      2 strip
As I use Linux and FreeBSD, but also Windows and PowerShell, I just did the same (list shortened):
Code: [Select]
C:\Users\newbrain> get-content (Get-PSReadlineOption).HistorySavePath   | %{ $_.Split(" ")[0]} | Group-Object | Sort-Object -Descending -Property Count | format-table -Property Count,Name

Count Name
----- ----
  288 git
   31 Import-Module
   23 .\iperf3.exe
   18 format-hex
   17 Get-Content
   16 pip
   16 $hex[0].ToString(
   16 exit
   16 cd
   15 ping
   15 C:\SysGCC\arm-eabi\bin\arm-eabi-gdb.exe
   11 ssh
   10 Get-Module
    9 $hex
    8 &
    8 Enter-PSSession
    7 wsl
    7
    6 subst
    6 python
    6 tracert
    6 help
    5 nslookup
    5 where.exe
    5 scp
    5 ipconfig
    4 .\Documents\iperf-3.1.3-win64\iperf3.exe
    4 Update-Module
    3 $hex[0].Bytes[0]
    3 New-Item
    3 runas
    3 dir
    3 bash
    3 python.exe
    3 wslconfig.exe
    2 rm
    2 Get-Member
    2 wsl.exe
    2 Get-FileHash
The results are a bit skewed as this is my machine in Italy, not the main one in Sweden, but git dominates nonetheless...
« Last Edit: December 12, 2020, 03:55:10 pm by newbrain »
Nandemo wa shiranai wa yo, shitteru koto dake. (I don't know everything, only the things I know.)
 

Offline c.coyle

  • Newbie
  • Posts: 3
  • Country: us
Re: DOS vs. Linux
« Reply #142 on: December 12, 2020, 05:02:32 pm »
I learned DOS way back before Windows and picked up on it quickly. Today I find it's still easy to navigate and remember commands. I've also used Linux (or Unix?), however, I find it's extremely confusing.

Linux seems like so many hidden commands. Recently I used DD to image a drive, and it took a very long time to figure out if my drive was connected, which one it was, and kept having command errors. . . .

You don't say a whole lot about exactly how you currently use Windows.

I switched from Windows to Linux Mint about 4 years ago and it's been great for what I need.  Mint with the Cinnamon desktop is very Windows-like.  Updates are easy, and better and more secure in my opinion, because I can decide if I don't want a specific update, and I need to enter a password for any program to be installed, uninstalled, or updated.  I'm sure it can be defeated by a sophisticated hacker, but it seems a lot more secure than Windows.

As far as "commands," I don't have a need to enter commands from the terminal (command line) very often.  When I do, I can usually find cut and paste commands online.  I'm sure if I had to use the terminal regularly, I'd pick up frequently used commands, just like I did way back in the DOS days.

The only downside for me has been a very few Windows programs that beat any Linux equivalent, two being IrfanView and Total Commander.
 

Online DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: DOS vs. Linux
« Reply #143 on: December 12, 2020, 06:13:56 pm »
Is there an opensource equivalent for SolidWorks ?
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: DOS vs. Linux
« Reply #144 on: December 12, 2020, 06:46:18 pm »
Is there an opensource equivalent for SolidWorks ?

Really, no.

Open-source mechanical engineering is severely lacking; it's much further behind than open-source PCB EDA (many people are having success with KiCad, after all).
 

Offline gnif

  • Administrator
  • *****
  • Posts: 1676
  • Country: au
Re: DOS vs. Linux
« Reply #145 on: January 02, 2021, 01:23:45 am »
No experience with SolidWorks, but learning FreeCAD atm which seems very powerful and nice to use.

Designed this enclosure and printed it out the other week, learning as I went.
 

Offline bostonmanTopic starter

  • Super Contributor
  • ***
  • Posts: 1790
  • Country: us
Re: DOS vs. Linux
« Reply #146 on: June 11, 2021, 02:51:30 am »
I'm uncertain about bringing up an old topic, but tonight, on a laptop that had been sitting around, I ran another update like the one in my earlier message.

Again, I'm not sure what these types of "errors" mean. Does this mean the update didn't get done correctly, is it an error that retried and eventually succeeded, etc...?
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11259
  • Country: us
    • Personal site
Re: DOS vs. Linux
« Reply #147 on: June 11, 2021, 03:07:25 am »
Do you get an error dialog? If not, then it is fine.

You are looking at the behind-the-scenes log; those error messages may be normal.
Alex
 

Offline bostonmanTopic starter

  • Super Contributor
  • ***
  • Posts: 1790
  • Country: us
Re: DOS vs. Linux
« Reply #148 on: June 11, 2021, 03:18:14 am »
Quote
Do you get an error dialog? If not, then it is fine.

I assume this would be a pop up window that would show after the update completed?

If so, I didn't get anything. However (maybe this is normal), when I reran updates, it found a few more showing a size of 0 kB (I'm guessing they were too small to show the size).
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11259
  • Country: us
    • Personal site
Re: DOS vs. Linux
« Reply #149 on: June 11, 2021, 03:38:12 am »
Yes, any sort of error dialog. If it just finished and showed a dialog with just an "Ok" or restart buttons, then it is fine. Those specific errors are a bit strange, but errors in the detailed log are not uncommon.

It would be helpful to know names of the packages. But if it goes away after the second update, then it is likely also fine.
Alex
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #150 on: June 11, 2021, 03:45:43 am »
In the past, the userspace tools for loading and examining Linux kernel modules were provided by the module-init-tools package, whose main tool was modprobe.
For easier maintenance, module-init-tools was superseded by kmod (and libkmod), which now provides modprobe.

(This is the background, and it explains why the path is called /etc/modprobe.d/ even though the tool reporting the error is libkmod.)
Again, I'm not sure what these types of "errors" mean.
Your old kernel module blacklist file, /etc/modprobe.d/blacklist-ideapad.conf, uses syntax that was supported by the old modprobe tools, but not by the new kmod-based modprobe.  The new kmod-based modprobe didn't abort; it only told you it would ignore those lines.

This file was used by old Linux distributions to work around BIOS/EFI/driver issues on some Lenovo Ideapads, at least some Legion Y720 backlight related issues.  Essentially, it told modprobe not to load certain drivers even if probing indicated that the hardware supported by those drivers is present.
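(For reference, the syntax the new parser accepts is one directive per line; the sketch below is an illustrative made-up file, with a file name and module names chosen as examples, not taken from the actual blacklist-ideapad.conf.)

```
# Hypothetical /etc/modprobe.d/blacklist-example.conf
# '#' starts a comment.  'blacklist <module>' prevents the module
# from being loaded automatically when hardware is probed; it can
# still be loaded explicitly with modprobe.
blacklist ideapad_laptop
blacklist nouveau
```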

Does this mean the update didn't get done correctly, is it an error that retried and eventually succeeded, etc...?
Like it says, the lines libkmod could not understand were ignored.  No operation was aborted or stopped; it just kept going.

So, if that file is used at all (and as far as I know, it is only used on Ideapads), some of the blacklisting rules are no longer enforced.
Does it matter?  Well, is it an ideapad?  Do all functions work?  Did the upgrade process remove the file (as it does not exist on most current Linux distributions anymore, I believe)?

If that file no longer exists, it did not matter even a tiny bit; it was just noise.  If your machine is not an Ideapad, and the file still exists, you might get those warnings again later; you could just delete the file (although I'd first check with the package manager which package owns the file, and if it's a config/data-only package, see if it could be removed without side effects).

Updating a Linux distribution often works well, but sometimes this kind of "old configuration file that is no longer needed or no longer conforms to current tools" issue does crop up.  Such issues are not dangerous, just annoying.  Because of these, and because packages accumulate that were really only needed during a certain time period (but whose dependencies are circular or so complex that the package manager cannot autodetect that they can already be deleted), I like to occasionally (every couple of years) reinstall a fresh distribution.  I usually do a bit of testing first among the alternatives, to see which one is, maintenance- and package-selection-wise, going in a direction I like.
It is easiest to do if you can easily swap storage media (SSD, HDD), so you can always go back if you decide the old one was better.
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 6779
  • Country: pl
Re: DOS vs. Linux
« Reply #151 on: June 11, 2021, 08:45:38 am »
The transition to kmod appears to have happened over five years ago, and I'm not aware of any legitimate syntax of the old tool that wouldn't work today, or that would start with 'k', 'l', or 'nnn'.

https://git.kernel.org/pub/scm/utils/kernel/module-init-tools/module-init-tools.git/tree/doc/modprobe.conf.sgml

IMO the whole blacklist file is borked for some reason.
 

Offline netmonk

  • Contributor
  • Posts: 26
  • Country: fr
Re: DOS vs. Linux
« Reply #152 on: June 11, 2021, 09:35:32 am »
...
EVERY SINGLE RESPONSIBLE SYS ADMIN ...
sleeps with a PERL cook book under the pillow..
...

Well, after 20 years doing sysadmin in critical business, I just discovered I'm an Irresponsible Sys Admin.
Never touched a line of Perl in 20 years of sysadmin, and for mental hygiene it will stay that way until the end.
Same with Java, but that's another topic :)
 
The following users thanked this post: newbrain

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #153 on: June 11, 2021, 09:43:13 am »
I was trying to recall if one of the distros provided a blacklist-ideapad.conf that used a nonstandard comment separator that worked with original modprobe (even though as an undocumented feature), but not with the new one.  I think so, but that's just a very vague itch at the back of my mind; we'd need to see that file to be sure.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #154 on: June 11, 2021, 09:51:04 am »
Never touched a line of Perl in 20 years of sysadmin, and for mental hygiene it will stay that way until the end.
I got PTSD from trying to maintain and extend a certain Perl-based groupware around the turn of the century, for a couple of years.  Full of slightly modified copy-paste code, no real design, as stuff just ... agglomerated together.  For every important Perl expression, the most obscure and obtuse form was invariably chosen.  I don't know if it was a job-security thing the original developers had going, or what, but it was absolutely a maintenance hell.  I so wanted to rewrite it from scratch!

I still don't like to touch Perl at all.

In fact, I just checked, and the sources are still available at SourceForge, untouched for almost two decades... I am tempted to download the sources and see if it really is as bad as I remember, but for the sake of my mental health, I better not.

(No, not actual PTSD.  Just memories of frustration and cursing the developers for just slapping things together with spit and bubblegum, and leaving bug fixing for others to worry about.)
« Last Edit: June 11, 2021, 09:53:24 am by Nominal Animal »
 

Offline bostonmanTopic starter

  • Super Contributor
  • ***
  • Posts: 1790
  • Country: us
Re: DOS vs. Linux
« Reply #155 on: June 11, 2021, 01:24:37 pm »
In Linux's defense, Windows updates are less informative.  At least Linux shows line by line what it is updating; on the other hand, I guess with so much information, it raises more questions.

 

Online PlainName

  • Super Contributor
  • ***
  • Posts: 6843
  • Country: va
Re: DOS vs. Linux
« Reply #156 on: June 11, 2021, 02:05:01 pm »
Quote
I guess with so much information, it causes more questions

Indeed. If you get used to seeing that stuff you'll ignore the one actually important line. But where does the problem originate in this example? The parser is working correctly by saying it's ignoring those lines (that's a valid warning and might be important; only the user of the util can decide that). The higher-level stuff seeing those warnings should pass them on, since they are warnings, etc. No-one is really responsible for saying "Yes, that's perfectly benign". So perhaps the fix here is to actually fix the source of the warning, which isn't the util but the configuration file.
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: DOS vs. Linux
« Reply #157 on: June 11, 2021, 03:13:47 pm »

I still don't like to touch Perl at all.



I am on quite the opposite side...

I have dealt with PERL since the 90s - and it took me quite a while
to reach a reasonable degree of comfort..

I am not close to the experts, but I am clearly at a point
where I can say that PERL is the most competent
and clever language for SysAdmin work.

But not only that!  Once you know it well enough.. you will find
it so easy that PERL can actually do anything SHELL can,
and anything Object-related in GTK, QT, Wx, Curses or whatever you imagine.

There is literally EVERYTHING ready in CPAN, and PDL per se
is beyond anything numeric as of today.

So IMHO there is nothing even close to PERL, as it sits
right next to the OS layer, yet is way smarter about doing things..

but it takes some time and effort.
Paul
« Last Edit: June 11, 2021, 04:44:11 pm by PKTKS »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6260
  • Country: fi
    • My home page and email address
Re: DOS vs. Linux
« Reply #158 on: June 11, 2021, 04:22:46 pm »
but it takes some time and effort.
Yeah, I've used a hammer to hammer in screws, so I know what you're talking about.

Me, I'm not that keen on any particular hammer.  I like screwdrivers, adhesives, wood joints, pegs, bolts, welding too.
 
The following users thanked this post: newbrain

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: DOS vs. Linux
« Reply #159 on: June 11, 2021, 04:32:13 pm »
but it takes some time and effort.
Yeah, I've used a hammer to hammer in screws, so I know what you're talking about.

Me, I'm not that keen on any particular hammer.  I like screwdrivers, adhesives, wood joints, pegs, bolts, welding too.

nahhh.. maybe.. there is just a steep step to jump into PERL

to be really comfortable in such a wide, powerful environment..

But once that initial steep step is done..

There will be no doubt that PERL is some orders of magnitude
ahead of the alternatives..

I agree with you.. the initial part is not soft.
More like a slap on the brain

Paul
« Last Edit: June 11, 2021, 04:44:32 pm by PKTKS »
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
Re: DOS vs. Linux
« Reply #160 on: June 11, 2021, 04:40:32 pm »
Hell is other people's Perl.
 

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: DOS vs. Linux
« Reply #161 on: June 11, 2021, 04:45:33 pm »
Hell is other people's Perl.

Holy shit.. you just made me aware I've been
typo'ing PEARL instead of PERL..

what a kid keyboard... shit ... fixed.  ::)

But you are absolutely right..
the best logic of one brain is not for the other..

and PERL allows just any possible brain logic to be expressed

agreed.
Paul
« Last Edit: June 11, 2021, 04:49:24 pm by PKTKS »
 

