EEVblog Electronics Community Forum
Products => Computers => Programming => Topic started by: renzoms on March 13, 2021, 06:10:40 pm
-
Can you guys help me with an ELI5 on what some programming things are, like "cmake", "git", "pip", "choco", "sudo", and "apt"? Also, I just started downloading Intel's oneAPI toolkit, so what's an API? I'm looking for general non-technical answers like "well, cmake opens up a file from the internet and downloads it".
For example, I've used git to pull some things but all of these keywords/commands and acronyms are going over my head.
I didn't know there was a difference between UNIX and cmd until today when "ls" wouldn't work in cmd.
AFAIK bash is a Unix shell, a shell turns what you type into commands, and bash scripts are files containing commands executed by the shell, and even this explanation is more technical than I'm looking for.
If I didn't mention a common keyword or program/software please describe it super simply, too...
Can you guys help me learn what these things are with explanations like "unix is the window where you code", "git installs keywords like get and pull that connect you to github files", "you need (pip/choco/cmake/sudo) because the cmd window...", "pip/choco are the same, they..."
Super non-technical, oversimplified explanations of these acronyms and components of programming and software so I can learn!
Also, I've coded using dynamic structures in C, have many hours and projects behind me, and have used git. But right now I've got MINGW64 and cmd open: MINGW64 says cmake is not found, but cmd found cmake. Then CMAKE_Fortran_COMPILER popped up, so I tried to download Fortran from Intel's oneAPI toolkit, and now I'm downloading 20 GB of libraries and compilers. I feel helpless, like I'm bottlenecking my programming ability by not knowing what all of this is. To me they are just acronyms that I keep copying and pasting blindly, and it's confusing.
Maybe this can help other people in the future.
Posted this on reddit: https://www.reddit.com/r/cprogramming/comments/m4bn8l/can_you_guys_help_me_with_an_eli5_on_what_some/
Also does anyone know how I can share this on eevblog/forum/beginners for more help potential? Thanks.
Also where is a good place to ask general questions like this? Because StackOverflow would delete this.
-
I got some great advice here: ELI5 on what some programming things are like "cmake", "git, pip, choco", "sudo" - Page 1 (eevblog.com)
I will "Pick a single topic and research it until you understand. If it is still above your level, pick a simpler topic. (Because) People spend most of their working career just trying to keep up with the technology."
I hope to keep gathering contributions here, ELI5 style, that will help me in my learning goal.
-
cmake, git, pip, choco, and apt are command-line programs, each with its own job.
cmake is a program which reads a specially crafted file (CMakeLists.txt) containing instructions on how to compile a program and produce an application. When you write a program, you can split the source code across multiple files, you can have resources (icons, texts in multiple languages), and your source code may use functions from other libraries, so the compiler needs to know where those libraries are. cmake automates compiling all the source code and checking for all the libraries your code may need (and downloading the source code for those, if needed).
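For a rough picture of what that looks like in practice, here is a minimal sketch (the folder layout and the program name "myprogram" are made-up examples, not anything from this thread's projects):

    cmake -S . -B build    # read CMakeLists.txt here, generate build files in ./build
    cmake --build build    # run the generated build: compile and link everything
    ./build/myprogram      # run the finished application (the name depends on the project)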
git is a version control system: a way of storing your source code and all its previous versions somewhere. Each time you make a change to your source code, you can save the files on your computer and then use a command to send the new version to the version control system (git); git analyzes the files, determines what changed, and stores just the differences between versions.
This way, any time you want, you can retrieve any version of your source code (the git application reconstructs the files from the information it stored).
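A minimal sketch of that save-and-retrieve cycle (the file name and commit messages are made up, and the commit id in the last command is a placeholder):

    git init                            # turn the current folder into a git repository
    git add main.c                      # stage the file(s) you changed or created
    git commit -m "first version"       # record a snapshot of the staged changes
    # ...edit main.c some more, then record another version:
    git add main.c
    git commit -m "fix an off-by-one"
    git log --oneline                   # list every recorded version
    git checkout <commit-id> -- main.c  # restore main.c as it was in an older version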
UNIX is an operating system, Linux is an operating system, Windows is an operating system.
Linux and Unix are mainly command-line systems, but there are programs which create a GUI much like Windows has.
Windows is designed to have a GUI (graphical user interface) and programs have windows but Windows can also run programs in command line mode, programs which don't create windows, just like Linux programs.
The operating systems have a command-line interpreter: an application which takes commands from the user and does something with them.
In Windows, that's cmd.exe or PowerShell. When you type cmd, a command-line interpreter opens up and you can type commands: you can type DIR to see the contents of a folder, or other things.
In Linux, there are multiple programs that can work like cmd; one of the most common is called bash. You can type the names of programs to open them, or you can type some specific commands, for example ls to list the contents of a folder (approximately the same purpose as the DIR command in Windows).
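To make the cmd-versus-bash point concrete, a tiny side-by-side sketch (the bash lines assume any Unix-like shell, such as the MINGW64 window mentioned above):

    # Windows cmd:   DIR     (lists the contents of the current folder)
    # Unix bash equivalents:
    ls       # list the contents of the current folder
    ls -l    # the same, as a detailed "long" listing
    pwd      # show which folder you are currently in
    cd ..    # move up one folder (this one happens to work in cmd, too)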
A lot of the problems you may have with programs not being found are probably related to this: the paths to those programs' folders are not in your PATH environment variable.
Basically, if you type cmake in cmd, by default cmd will look for cmake.exe in the folder you're currently in; if there's no cmake.exe there, it will look in a few standard folders like C:\Windows, and then it looks through the folders explicitly listed in the PATH environment variable. If cmake is not found in any of these folders, it will say cmake is not found.
You must either always type the full path to cmake (for example, instead of "cmake" you would type "c:\program files\cmake\cmake.exe"), or add the c:\program files\cmake folder to your PATH environment variable.
See this video about how you add folders to your PATH variable:
https://www.youtube.com/watch?v=q7PzqbKUm_4
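To tie this back to the OP's MINGW64 window: the same PATH idea applies there, except that Windows drives show up as /c/... A sketch, assuming CMake was installed under C:\Program Files\CMake (your install path may differ):

    type -a cmake                                   # does bash find a cmake command at all?
    echo "$PATH"                                    # the list of folders bash searches
    export PATH="$PATH:/c/Program Files/CMake/bin"  # add CMake's folder (this session only)
    cmake --version                                 # should now work if the path was right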
-
This is a duplicate of a thread in the Beginner forum.
-
I am a bit wary of dipping my toes in this topic, because I've found long-time Windows users encounter the most frustration when learning Unix/Linux; much more so than completely inexperienced users. That is not a value judgment, just an observation over a couple of decades of helping people learn, and it has to do with having to let go of already-learned "facts" (assumptions and observations that are only valid in one specific computer environment, not in all of them) before learning the new ones.
That said, the web is full of "Introduction to Unix/Linux" pages, tutorials, courses, and videos. While it would take a bit of time to lightly scan a few of these to find the one(s) that work for you (have the approach and complexity that suits you personally), it would be worth it to start there.
What follows is an informal description I like to tell people as a story in the first couple of hours, to get them on the right track; the right point of view, if you will. The purpose is not to have them remember any of these details, really, but to build a mental picture that, while very hazy and fuzzy and unclear at the start, will work as a reliable foundation to build on: something where the pieces you learn will have a natural place to land and fit like a jigsaw puzzle. I really don't like it in this wall-of-text form; but as an idle chat, or over a coffee, when the victim, er, target is relaxed and unsuspecting, it really works to show the lay of the land. Feel free to :-DD at me if you want, but I do claim this works.
To the description.
Computers are dumb collections of devices. You have storage space for stuff in the form of hard drives, solid state drives, USB sticks, SD/microSD memory cards, and so on. You have working memory, RAM, that loses its contents whenever the computer loses power. You have one or more processors, with one or more cores – like the wedges in an orange – that manipulate the data, construct the visuals you see, and so on. You have buses – different types of connections – connecting the processor and working memory to storage and peripherals, like network controllers (both wired/Ethernet and wireless). Most computers have a graphics adapter, which takes a numerical description of what to display and how, and generates the electrical signals to display that as a picture or a video on a display. Some have more than one; and scientists often use those graphics adapters for simulations and such, because they often contain a lot of computing power for specific types of calculations, much more than the actual processor in the computer.
When the computer is turned on, it starts by assessing its own core parts, and initializing them to a known state. This part is done by the BIOS, nowadays typically of the EFI or UEFI variety. It is stored on a small Flash memory chip on the motherboard dedicated for this, not in any standard storage, and can be updated; but usually that is only necessary to add support for new processors or to fix a programming error in the BIOS. The very first things you see on your display when it gets a valid signal from your computer are provided by the BIOS. Usually, an early enough keypress (varies; often DEL, F2, or F10) will cause the BIOS to display a menu system where you can change the details of how the main parts of the computer are initialized and treated. The BIOS is an integral part of the motherboard, and does not change even if you change the processor, memory, or other peripherals like PCIe cards or hard disks, SSDs, et cetera.
After the BIOS has set everything up, it looks for bootable media (in an order you can select in the BIOS). The first one it finds, it uses. In current computers, what it finds is a boot manager: a simple widget that lets you choose between operating systems (so you can have more than one installed on your storage media), and even pass information to that operating system's nucleus, the kernel. In Linux, you can tell it to use one subsystem instead of another.
Microsoft Windows comes with its own boot manager, as does Mac OS. Linux has several, with Grub (http://www.gnu.org/software/grub/) the commonly used one on Intel/AMD architectures; and U-Boot (https://www.denx.de/wiki/U-Boot/) on ARM-based architectures and embedded systems. Both Grub and U-Boot are highly configurable, which also means they can be a bit daunting if the default/automatic settings happen to not work for a specific machine, and one needs to adjust their settings. But usually, the defaults work just fine.
The boot manager passes the control over to the operating system kernel. The kernel is like a janitor, scheduler, and valet all in one: it is where the buck stops, so to speak. To interface to peripherals and the graphics stuff, the kernel has modular parts called drivers, which slot into the kernel like a robot adding a new manipulator hand at the end of its modular arm.
In Unix and Linux, everything else is called userspace, including the desktop environment you are using right now. In Windows, the desktop environment is part of the kernel. Mac OS is even funkier, because its kernel (XNU) is a hybrid built around a Mach microkernel, and some of its "attachments" (drivers and services) actually live on the userspace side.
The main difference between kernelspace and userspace is that userspace is controlled and limited, while kernelspace is purely the kernel's own domain. If something crashes in userspace, the kernel just cleans it up, like a good janitor; no harm done. But if something crashes in kernelspace, the kernel usually crashes or locks up as well, and you'll need to reboot the machine – losing all unsaved work – to get it working again.
While the difference in where the desktop environment is in Windows and Unix and Linux sounds small, it is conceptually HUGE. In Unix and Linux, you can have any number of desktop environments your hardware can support; none of them are special in any way. Appliances and embedded devices running Linux, like routers and TVs, often have zero desktop environments, even when they have a display for one.
Instead of a desktop environment, Unix and Linux kernels do have one special process, called init. Its purpose is to bring up all the operating system services and daemons, and sort-of maintain them, until the machine shuts down or reboots. (It isn't that special, though: it has a small janitorial job of reaping dead processes whose real parents have already exited, and it always has process ID 1, and if it ever exits or crashes, the kernel will shut down or reboot the machine.)
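You can peek at this special process on any Linux box; a one-line sketch (the name shown depends on which init system the distribution uses):

    ps -p 1 -o pid,comm   # show process ID 1 and its command name, e.g. "systemd" or "init"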
There are several init systems to choose from, from the venerable decades-old SysV init to the current default of systemd, and half a dozen others. You'll want to use the one your Linux distribution uses; just know that it, too, is just one modular piece in a Linux system.
If the OS has a desktop environment, that is one of the things the init system will start. It consists of a display server (typically X or Wayland), a greeter (the part where you supply your username and password to log in, and usually can choose which desktop environment/session/window manager you want to use), a window manager (which draws the title bars, buttons, menus, and window decorations on application windows), a widget toolkit, a desktop application (which shows you the desktop icons), a file manager (which provides the folder views), and some services (perhaps a panel and applets, for displaying a clock/calendar, network status, volume controls, currently running applications as tabs, and so on). These are all modular.
The difference between the various Ubuntu variants like Kubuntu/Lubuntu/Xubuntu is that they default to a different set of desktop environments and associated applications. Ubuntu defaults to Gnome, Kubuntu uses KDE, Lubuntu LXDE, and Xubuntu XFCE. They look and feel different, but provide similar services. If you install one, you can still install the others. Most greeters have a menu where you can choose which desktop environment you will be using; I often recommend creating a test user account for each one (so you can remove the ones you don't like without leaving any tidbits that might interfere with the others), and having a go. Nobody knows what works for you best until you try them out.
A typical Linux system has a graphical user interface, where one can run one of the terminal emulators, to get a text-based command-line interface. The init system usually also provides a few virtual consoles, text-based interfaces provided by an OS service called getty, which in turn runs a program called login that provides a text-based interface to log in (provide your username and password), so that you can get the same text-based command-line interface you can with a terminal emulator. The main difference between the two is that they are provided by completely different OS services, so even if one crashes, the other is probably still available. (A third, similar interface, is SSH: it provides a way to connect via the network, to get the same text-based command-line interface.)
Note that in Unix/Linux/Mac OS, there is only one file system, starting at the root, /. To use a storage device, it is mounted as a subtree somewhere, nowadays usually under /media/username/media-name, but earlier often at /mnt/media-name. There are no separate drives, no A: or C:, just the one filesystem tree.
(There are chroot environments, BSD jails, and Linux namespaces/containers that can limit a process to working on a specific subtree of the filesystem without any access outside it, though.)
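A sketch of the single-tree idea in practice (the device name /dev/sdb1 is an assumption; check what lsblk reports on your own machine):

    lsblk                        # list storage devices and where (if anywhere) they are mounted
    sudo mount /dev/sdb1 /mnt    # graft a USB stick's filesystem onto the tree at /mnt
    ls /mnt                      # its files now appear inside the one filesystem tree
    sudo umount /mnt             # detach it again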
There is no single command-line interface, either: it too is just a service provided by a shell.
In old MS-DOS, command.com provided the DOS shell. In Windows, there is the traditional DOS prompt, and PowerShell, with their own syntax and semantics. You can also install WSL, which provides a somewhat Linux-like environment using a Bash shell.
The most common shell used today is Bash, which provides a superset of the standard POSIX shell features, plus additional stuff. (POSIX.1 is IEEE Std 1003.1, an international standard established in 1988 and developed ever since, which defines a command-line shell and standard C library features that users and programmers can rely on in POSIX-conforming, or at least POSIXy, operating systems: most Unixes and Linux, even Mac OS.)
Because of the influence of POSIX, almost all command-line utilities nowadays show a short help (usage) when run with -h or --help as a parameter. For example, running bash --help does not start a new Bash shell, but just outputs a short summary. Better yet, almost all command-line utilities are described in detail on man pages, with the Linux man-pages project (https://www.kernel.org/doc/man-pages/) being the central repository for the core parts, and each command-line utility providing its own. For example, if you forget which options you need to use with ls to show the details of the files, you run man ls. It shows a text rendering of man 1 ls (https://man7.org/linux/man-pages/man1/ls.1.html), which says that -l turns on the long listing format; so ls -l is the command you want.
Because typing the same commands over and over again is silly and slow, we usually use aliases. For example, I have ll (ell-ell) as an alias for ls -alF --color=auto, which uses a colorized long listing format and shows all files (including those that start with a . which are normally hidden). This is a feature provided by most shells, including Bash; and you can set these to be automatically available by setting them in Bash configuration, typically the .bashrc file in your home directory.
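For instance, lines like these at the end of ~/.bashrc (these particular aliases are common personal choices, not anything standard):

    # ~/.bashrc is read by every new interactive Bash session
    alias ll='ls -alF --color=auto'   # colorized long listing, including hidden files
    alias la='ls -A'                  # list all entries except . and ..
    alias ..='cd ..'                  # hop up one folder
    # apply to the current session without opening a new terminal:
    #   source ~/.bashrc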
In the Unix/POSIX worldview, complex tasks are best achieved by combining many simple tools. Where bash or sh provides a shell, find can be used to find files based on their name or other properties, grep can be used to find specific content or patterns (regular expressions), sed can do replacements, awk is a powerful record/field (line/column) processing tool with a simple rule-based language, make can automate compilation and software building tasks via simple recipes (and as it compares file modification timestamps, it can detect what needs to be redone after modifications), and so on.
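A small sketch of that combining, with hypothetical file names:

    # Count how many lines across all .c files below this folder mention malloc:
    find . -name '*.c' -exec grep -n 'malloc' {} + | wc -l
    # Classic awk one-liner: print the first colon-separated field of /etc/passwd
    # (every account name on the system), sorted:
    awk -F: '{ print $1 }' /etc/passwd | sort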
-
The best way to learn *nix is to actually use it. Use the GUI desktop for browsing and email but do everything else at the command line. Set up a machine with Linux (I prefer the Mint distribution) and use it exclusively.
You will find that just about everything that needs to be changed/repaired or upgraded will require a command line operation. With Linux, you will use the command line whether you like it or not.
Want to add a printer? Well, you can forget Plug and Play. First you install CUPS from the command line, and then you mess around with the browser pointing at "localhost:631". It is a terrific pain in the ass, but it's better than having to edit /etc/printcap. CUPS doesn't come preinstalled; you're on your own. In fact, you can't even use a serial port without adding yourself to the dialout group. Of course, that assumes you have the vaguest idea that a) your port won't work without permissions, and b) you have some notion of how to add your username to a group. There is probably a c): you know what a group is and you have some idea of how permissions work. 'sudo' will flow off your fingertips like fire.
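For the record, the incantations being complained about look roughly like this on a Debian/Ubuntu-flavoured system (the package and group names here are the usual ones, but may differ on other distros):

    sudo apt install cups              # install the printing system
    # ...then configure printers in a browser at http://localhost:631
    sudo usermod -aG dialout "$USER"   # add your account to the serial-port group
    groups                             # list your groups (log out and back in before dialout shows up)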
The learning curve is steep and high. That is the reason that Linux only accounts for somewhat less than 2% of desktops after being around for 27 years or so.
And yet there are many code wienies who use Linux as their only system. They know all the magic incantations but that knowledge came with a price: Years of effort... Plus a LOT of time on Google!
That's why Windows is so popular. It simply works. You buy a printer that says Win 10 compatible and just plug it in. If it is on the network, Windows will find it. Of course you can use the serial port. There is no reason not to! It's your computer!
But here's the thing: If I want to play with a Raspberry Pi, I better know Linux. If I want to play with nvidia's Deep Learning AI stuff, I must know Linux. All of today's magic is being done with Linux and there's a reason for that: The command line!
Unix had an idea: Create small tools that take input from stdin and produce output to stdout. Then use the output of one tool to provide the input to another via a pipeline. Allow redirection of stdin and stdout to allow for file IO instead of console IO. Several tools could be strung in series to perform some complex operation. Each tool did one thing but it did it very well. Unix also came up with the unified file system approach. Everything is a file. Linux copied the ideas.
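A sketch of those ideas, with made-up file names:

    # Redirection: stdin comes from one file, stdout goes to another:
    sort < names.txt > sorted.txt
    # Pipeline: each tool's stdout becomes the next tool's stdin; here,
    # the most frequent ERROR lines in a hypothetical log file:
    grep 'ERROR' access.log | sort | uniq -c | sort -rn | head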
-
Most Linux/UNIX utility commands' basic documentation (excluding tutorials, FAQs, etc.) is in man page (https://en.wikipedia.org/wiki/Man_page) format. There are several sites that put the man pages online for various Linux distros and compiler toolchains, so the first reaction of a Windows user encountering an unfamiliar utility command that isn't 'native' to Windows, in a programming/development context, should be to google: man command_name
You can normally figure out enough from a man page to either use the command properly or get enough keywords to google a decent tutorial for it.
Of course, if you are running Linux, UNIX, or Cygwin, you can view man pages locally, though the man pages for the specific utility you are interested in may not be available if it wasn't installed through the distro's package management system. Also, the default man page viewer is likely to use a text-only console interface that is unfamiliar to Windows users, so you may want to install a GUI man page viewer on your system.
-
I'm the first to point out my difficulties with Linux. I have been using it for over 17 years and I still don't know half of the commands. May never...
Buy a book, follow along. Here is a cheap book, free with Kindle Unlimited. I haven't read it but it sounds like an introductory text.
https://www.amazon.com/Linux-Beginners-Introduction-Operating-Command-ebook/dp/B00HNC1AXY
I tend to prefer paper books but I'll settle for eBooks if I have to. Web sites aren't as easy to deal with, the number of bookmarks tends to grow without bound. I pretty much remember the books on my bookshelves. There are many!
Nevertheless, all of human knowledge is on Google. There are videos and text based tutorials all over the place.
The common theme in all of these is the time it takes. It just takes a lot of time to learn this stuff.
-
That's why Windows is so popular. It simply works. You buy a printer that says Win 10 compatible and just plug it in.
I'm old and as a result, my recollection of history varies from yours very significantly where Linux is concerned.
By your logic, Internet Explorer was far more popular than Netscape Navigator because Internet Explorer was on everyone's Windows PC?
You may recall that the DOJ had a different opinion and found Microsoft guilty of Sherman Antitrust Act violations by way of their retail channel monopoly on the PC.
I submit that the reason Linux and other Unixes aren't on more desktops is *exactly* the same. Try buying a Linux desktop in any large office goods outlet; it's *still* Windows only. Microsoft still exerts an illegal monopoly on all those retail channels.
-
Raspberry Pi, I better know Linux
Windows Embedded runs on the RPi as well as on several industrial machines, and there are professional development systems based on Visual C/Visual Studio.
If I want to play with nvidia's Deep Learning AI stuff, I must know Linux
Windows Embedded has support for this. Not for free; that's the main big difference :D
I don't like Windows Embedded, but I am paid to program on it.
-
I am a bit wary of dipping my toes in this topic, because I've found long-time Windows users encounter the most frustration when learning Unix/Linux
My frustration hits like a punch when I have to support NetBSD/OpenBSD. Damn, a lot of things are oriented at GNU/Linux only, and when you try to move them to BSD you find a lot of incompatibilities. Nothing that cannot be fixed, but it consumes a lot of time.
Unix and Linux should be closer to each other, and friends.
-
Well, there are plenty of used computers that could easily be upgraded to Linux from Win 97 or whatever. In fact, I have 3 laptops that have been converted along with a dual-boot tower and a stand-alone tower.
You can buy Linux machines from Dell
https://www.dell.com/en-us/work/shop/overview/cp/linuxsystems
Many people build their own systems from parts. I don't know how many install Linux but it can't be many. The simple fact is, all of the easy-to-use production tools show up on Windows first. Simply because it is a bigger market. That and they can't actually sell software into the open-source community. Those folks want everything free and open. That's a problem for proprietary information.
So, few companies develop for Linux, thus there is a limited selection of tools available on Linux and, ultimately, Linux is a poor choice for most users. They know this because they keep buying Windows machines. Really, compatibility is everything.
I am agnostic in the debate over Linux versus Windows, I have about the same number of machines on each. I probably use Windows more often for things like mail and browsing but most of my code projects are on Linux.
The silly Europeans complaining that Internet Explorer came free with Windows and fining Microsoft for not including their competitors' browsers was the height of absurd. Were I Microsoft, I would have quit selling software into Europe. If they wanted Windows, they would have to order it separately from a 3rd party country. That would keep them in the stone ages!
I like Linux, I use Linux but I still maintain that almost everything is more difficult. A trip to Google will usually solve the problems but it won't always be an intuitive solution. What do you mean I don't have access to my serial port, it's my machine!
-
That and they can't actually sell software into the open-source community. Those folks want everything free and open.
Not necessarily, but that's a typical view of someone who doesn't understand the ecosystem.
The silly Europeans complaining that Internet Explorer came free with Windows and fining Microsoft for not including their competitors' browsers was the height of absurd. Were I Microsoft, I would have quit selling software into Europe.
By that logic you'd not have Windows either, because your own government accused Microsoft of monopolistic behaviour, and judged them guilty. But of course, it must be Europe at fault.
You're welcome for computers, by the way. And telephones, radar, jet engines, rockets.. the entire existence of your country..
-
They know this because they keep buying Windows machines.
Ansys, SolidWorks, Rhinoceros, Avoget, Tina ... is why I keep buying Windows machines.
And, ironically, it's also why I keep dreaming about running Haiku on my Japanese PDA.
Haiku comes from the legacy of BeOS, a name which gives off strong "the legacy of being an OS" vibes, which, wow ... sounds so philosophically deep that I wonder which one will be the OS of the future? :o :o :o
Haiku is somehow Unix, but much cleaner and simpler. Unfortunately it doesn't yet work on the RPi (see port_status (https://www.haiku-os.org/guides/building/port_status/)); Minix 3 is likewise still a research thing, even if it's now compatible with NetBSD's packages.
-
Unix and Linux should be closer to each other, and friends.
Linux started moving away from Unix some time ago, it annoyed me enough that in 2016 I moved my desktop to FreeBSD and ZFS and never looked back.
I'm not saying that Linux doesn't rule the known world and doesn't deserve to. I'm old and I'd like to use my ancient Unix admin skills as long as possible; I'm not interested in PulseAudio, systemd, and the many other Linux 'cool' new alternatives.
I'm still annoyed about the Linux switch from LILO to GRUB. :box:
-
I'm still annoyed about the Linux switch from LILO to GRUB. :box:
I want my /dev/modem back!
One book I can recommend if the OP wants a practical approach at things like cmake and git, (but not pip or choco) for both Windows and Linux is "21st Century C, 2nd Ed." by Ben Klemens (https://www.oreilly.com/library/view/21st-century-c/9781491904428/).
-
My frustration hits like a punch when I have to support NetBSD/OpenBSD ... Unix and Linux should be closer to each other, and friends.
Linux started moving away from Unix some time ago, it annoyed me enough that in 2016 I moved my desktop to FreeBSD and ZFS and never looked back.
Both quite true. For software development, what we often call Unix is actually POSIX: IEEE standard 1003.1. There were many flavours of Unix, and they too differed; that's why the POSIX standard was created over three decades ago. As a standard, it is as old as the C standard (although C was created in 1972-73, it was first standardized as ANSI C in 1989 and as ISO/IEC 9899:1990 in 1990). It is sad that so few programmers know how the two complement each other.
Very few teach POSIX programming at all. You can see this in all the examples and exercises that use fgets() instead of getline(), or opendir()/readdir()/closedir() instead of nftw()/scandir()/glob() (or the BSD fts_open()/fts_read()/fts_children()/fts_set()/fts_close(), which are also available in the standard C library on Linux and Mac OS). Very few know about iconv_open()/iconv()/iconv_close(), although it can solve basically all of one's character set issues; and so on.
Because so few realize that POSIX is the common glue among all – even though the Linux man-pages project (and thus all man pages in Linux) clearly states in the Conforming to section whether a programming interface or function is POSIX, Linux-only, or GNU-only – it is no surprise that few if any programmers group the OS-specific details into separate, easily refactored compilation units. (I am referring to OS kernel interfaces that do differ; Linux uses pseudofilesystems (/proc and /sys), whereas the BSDs use syscalls. These differences cannot always be avoided, but they can be put in a separate header file and compilation unit, so that one generally only needs to rewrite that one file to port the code to a new OS.)
(The above is a written excerpt of the "learning software development in C" chat that I give learners, to give them a better chance of understanding the differences and of using reliable/suitable sources for their information. It goes on to describe how tools like make and git can be used to make life easier, and how external tools like Graphviz can be used to easily visualize data structures from temporary debug output – how to do things the easy but efficient way. Depending on the interest, I sometimes veer early into Qt and GTK+, as getting immediate visual feedback of high quality can make a serious difference in how steep the learning curve feels to an individual.)
-
Just passing here to add 2 cents of jambo....
I have seen the forum poll about "*NIX" distros...
Well.... as a veteran I can say for sure the following 2 cents of void null IMHOs:
= ** IF ** anyone can look at Android and see any slight breeze of *NIX... well, please post what exactly Android has in common w/NIX. Although we all know there is a Linux kernel and "some" GNU tools in there, they are heavily deformed and twisted... a stupid brick.
= ** IF ** anyone can relate these "POTTERIX systemd-based" DISTROS with *NIX... well, it is becoming clear that they diverted from that a long time ago. RH, Debian, and others just veered away from *NIX for reasons I can speculate about a little, as the path is clearly to make the POTTERIX as close as possible to that shitty MS OS, so as to place the buzz on top of it...
I (myself) veered away from these POTTERIX thingos a long time ago. As #NOMINAL said in some comments, several points are enough to consider any POTTERIX thing just as unbearable as MS...
Fortunately there are several alternatives... I just don't like seeing the frequent comparisons among them these days, and the mess that POTTERIX shit is spreading...
2 shallow cents of jambo :blah: :blah: :blah:
Paul
-
please post what exactly Android has in common w/NIX
At the systems programming level (i.e., if you write C), it is very POSIXy: not POSIX certified, but almost completely POSIX-compliant. The differences to POSIX are rather obscure, and are the same as Linux differences to POSIX; mostly odd corner cases.
The thing that made Unix strong, simply put, was the philosophy (https://en.wikipedia.org/wiki/Unix_philosophy), especially modularity and the KISS principle (https://en.wikipedia.org/wiki/KISS_principle) (sort of minimalism).
As an example, the Linux kernel developers went with a modular, easily refactored structure – even though it is a monolithic kernel and not a microkernel – because they had to, in order to keep up with the rate of change, which is utterly, ridiculously high if you look at it as a project. But it is what makes it maintainable in the long term.
From a system administration, security, and maintenance perspective, the modularity also allows one to build systems with as few reliability bottlenecks as possible: single points of failure (https://en.wikipedia.org/wiki/Single_point_of_failure) that, when they fail, bring down the entire system.
The vehement opposition you often see against systemd ("potterix") is not irrational: it is based on sound engineering principles. You see, systemd is NOT just an init system: it has become an entire encompassing layer in the userspace; a new mandatory single point of failure. Its code quality is very low (most of their submissions to e.g. the Linux kernel have been rejected for being utterly silly and full of bugs; just see the kdbus saga, which was submitted by Greg KH, a knowledgeable and helpful fellow), and not only does it break the Unix principles (by being a complex monolithic layer that tries to be everything to everyone), it also makes it impossible for system administrators to follow the Unix principles. In one fell swoop, it replaces the Unix paradigm with its own: The System Management System.
Note that there are a lot of system administrators and system integrators who never saw the value of the Unix principles, and are perfectly happy with systemd. It is still possible to create a Linux distribution without systemd (for example, Devuan (https://www.devuan.org/), or one of the source-based distros like Gentoo), but it is becoming more and more difficult, as systemd integrates previously modular components into itself, because it intends to be the only viable System Management System in the Linux world. It is not a nefarious purpose per se; such a single system is simply very attractive to certain commercial companies, especially IBM and Red Hat.
I have been observing the Debian shift from modular init to systemd-only via the Debian CTTE (https://lists.debian.org/debian-ctte/) mailing list, and I must say, the systemd-related maintainers seem willing to do a lot of extra work to ensure Debian can only run if systemd is installed. Debian being the root of most non-Red Hat distributions (including Ubuntu, Mint, and MX Linux), it makes commercial sense for Red Hat et al. to hire people to do that, if the intended target is to capture the majority of Linux installations with a root-level service that has absolute control over the system. Since there is no perkele-shouting benevolent dictator for the Linux userspace, there is nobody to stop them from doing whatever their employer wants them to do. (Linus Torvalds often sides with users against kernel developers proposing changes that break things; that's also where he used to curse at devs he saw as submitting work without putting enough effort into it, not writing code at the level Linus believed they could when they wanted to. Sadly, that kind of push against idiocies and lax coding is no more, because of political correctness.)
(Funnily enough, a large number of systemd core developers have an antagonistic relationship with the Linux kernel developers; their submissions having had issues with sensibility, reliability, and fitness for purpose. Again, see the kdbus debacle, or the kernel debug option debacle – the systemd developers decided that it should be for systemd to use it, and caused it to spew so much logging output that most systems became undebuggable because the logs were full of systemd stuff, pushing the kernel information out before it could be captured when problems occurred. And refused to fix it, because "the kernel does not own the kernel command line".)
So, yeah, there is a commercial push to move Linux closer to the kind of non-modular OS userspace that Windows has, and it is definitely non-UNIXy.
Whether you consider that good or bad, feel free to make up your mind. I'm only pointing out some of the facts I see relevant here.
On the systems programming level, POSIX is still going strong, although one should realize that Windows' WSL is not completely POSIXy; just enough to let many Linux applications compile and run. It is easy to write code that works in all POSIXy systems (Linux, MacOS, BSDs), but not in WSL.
The key here is to realize that POSIX is the real international standard behind what unified Unix, and what still unifies Linux, the BSDs, and MacOS, and which Microsoft still hesitates to embrace; so you may have to install Linux, for example in a virtual machine or on a separate machine – there are lots of cheap single-board Linux machines with ARM processors you can use. POSIX itself is not perfect by any means, but it is a stable, known working standard that nicely complements C, and it also covers system administration tools et cetera.
-
The key here is to realize that POSIX is the real international standard behind what unified Unix ... there are lots of cheap single-board Linux machines with ARM processors you can use.
A nice way to get started might be the Raspberry Pi 400 computer. The only external requirements are a mouse and a monitor, and the mouse is included (but a useless 2-button model). $100 for a fairly reasonable desktop isn't bad.
Yes, I have some...
https://www.raspberrypi.org/products/raspberry-pi-400/
-
A nice way to get started might be the Raspberry Pi 400 computer.
There are several SBCs based on Amlogic Meson (http://linux-meson.com/doku.php) SoCs with support in upstream Linux (i.e., supported in standard distributions), often used as media centers. One interesting one is the Odroid N2+ (https://www.hardkernel.com/shop/odroid-n2-with-4gbyte-ram-2/) (with 4 GB RAM), with only U-Boot on the boot flash, plus a USB 3.0 SATA adapter and a SATA SSD. It's much faster than a Pi 4, and is supported by many distributions (https://wiki.odroid.com/odroid-n2/os_images/third_party). Thing is, if one decides one doesn't like Linux after all, one can install Android and turn it into a media center.
(Using an SSD for storage makes a big difference, but you'll need USB 3 and a USB-to-SATA adapter – or you can use an external USB 3.0 hard drive dedicated to the SBC.)
Linux SBCs all have their quirks, so before buying the hardware, I recommend looking up the user forums discussing the devices and their real-world use. People prefer different communities, so I believe it is best to find the community one feels one can best participate in, with one's own use cases in mind, and then choose suitable hardware. No hardware is perfect, and the idea is to learn anyway, so the human aspects can be much more important than the technical aspects.
Some find the Raspberry Pi community cozy. I could never fit in, but that's most likely my own fault.
-
This may differ depending on where you live. At the school where I work, we buy second-hand laptops from a supplier specializing in refurbishing the five-year-old office equipment that is routinely tossed. We have been getting (in bulk, though) perfectly fine EliteBook 820s for about what a new Raspberry Pi 400 would cost here. Even at twice the price I would consider that a better buy, unless you are explicitly looking for something ARM-based.
-
We have been getting (in bulk, though) perfectly fine EliteBook 820s for about what a new Raspberry Pi 400 would cost here. Even at twice the price I would consider that a better buy, unless you are explicitly looking for something ARM-based.
Or unless you really like the 40-pin I/O header for more physical pursuits.
If you have a source of cheap laptops, why not? Just download and install Mint and you're good to go.
I have personal laptops of various generations and have converted all but one to Linux. The other one I upgraded to Win 10 from Win XP. Worked out fine! I installed a large SSD instead of the HDD. Fairly decent performance from a 10 year old laptop.
-
Even at twice the price I would consider that a better buy, unless you are explicitly looking for something ARM-based.
Or unless you really like the 40-pin I/O header for more physical pursuits.
It's worth noting that a Pi's I/O header is fully accessible over your LAN from Python or C programs running on a PC (or another TCP/IP-capable computing device) if you use the PiGPIO library (http://abyz.me.uk/rpi/pigpio/) and its daemon.
A Pi Zero W, connected by WiFi is dirt cheap and has ZERO risk of blowing anything on the PC if it suffers a mishap.
If PiGPIO isn't sufficient, SSH or VNC into the Pi from the PC. It's somewhat of a PITA to get set up headless, but that can be alleviated by configuring the Pi as an RNDIS device (USB Ethernet gadget) and USB-tethering it until you've got the WiFi credentials entered and tested.
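A sketch of that remote-GPIO idea using the pigs command-line utility that ships with pigpio (the address and GPIO number are example values; the pigpiod daemon must be running on the Pi):

    # On the Pi, start the pigpio daemon once:
    sudo systemctl enable --now pigpiod
    # On the PC, tell pigs which Pi to talk to over the network:
    export PIGPIO_ADDR=192.168.1.42   # example address of the Pi
    pigs m 17 w                       # set GPIO 17 to output mode
    pigs w 17 1                       # drive GPIO 17 high
    pigs w 17 0                       # and low again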