Author Topic: (solved) Some Linux questions from a noob  (Read 1784 times)


Offline LoganTopic starter

  • Frequent Contributor
  • **
  • Posts: 345
  • Country: us
(solved) Some Linux questions from a noob
« on: March 23, 2022, 09:48:44 pm »
Hi friends.

I'm an experienced Windows user, but Windows 7 is the last OS I feel good about, and it's over now, so I want to switch to Linux.

I have tried Linux before, but never understood it deeply. I want to control it as well as I do Windows. After a lot of reading online, I still have some questions remaining:

"sysctl" seems to be the primary system level settings, right? Are there any "hidden settings" other than sysctl?
It seems "sysctl" is not permanent, the OS boot at default values anyway, and "sysctl" apply custom settings after boot. How can get around this?

It seems "systemctl" is the main way to control programs start at boot, both system services and user ones? Are there other ways for a program to start automatically(either on boot or on whatever trigger) without being shown in "systemctl"?

When using package managers (such as apt), do they always verify the downloaded contents? (Assume I'm on a very rogue network.) I feel unsafe because on Windows I always download from trusted sources over HTTPS, but in the Linux world people mainly rely on package managers. Is there any way I can check that the package manager actually verifies files before installing anything?

I always feel unsettled by the "oom-killer" (which doesn't exist on Windows). Is there a way to completely disable it? (I've read tons of pages and found no answer for this.)

That's all the questions I have for now; maybe I'll have some new ones later.

Thank you all.
« Last Edit: March 25, 2022, 06:41:13 pm by Logan »
 

Offline nightfire

  • Frequent Contributor
  • **
  • Posts: 585
  • Country: de
Re: Some Linux questions from a noob
« Reply #1 on: March 23, 2022, 10:20:47 pm »
sysctl: The Unix philosophy generally is to make runtime changes non-permanent (as opposed to Windows). To make them survive a reboot, you need to edit the appropriate config file. As sysctl touches fairly deep settings, you really should be careful about what you do; normally you shouldn't reach for sysctl in the first place.
Advantage here: In case you messed up, a clean reboot fixes everything at once.

Package manager: Depending on the distribution there may be differences. Most package managers will check a download against a checksum (or several checksums) to ensure integrity before proceeding.
 
The following users thanked this post: Logan

Offline ve7xen

  • Super Contributor
  • ***
  • Posts: 1193
  • Country: ca
    • VE7XEN Blog
Re: Some Linux questions from a noob
« Reply #2 on: March 23, 2022, 10:59:40 pm »
"sysctl" seems to be the primary system level settings, right? Are there any "hidden settings" other than sysctl?
It seems "sysctl" is not permanent, the OS boot at default values anyway, and "sysctl" apply custom settings after boot. How can get around this?

sysctl is the primary way to configure the kernel and its subsystems/drivers. There are other elements of the 'system' that are configured differently. How the settings are initialized at boot depends on the distribution. On most modern distributions you can put the settings into either /etc/sysctl.conf or a .conf file under /etc/sysctl.d/ (see `man sysctl.conf`).
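A minimal sketch of persisting a setting this way, assuming a distro with /etc/sysctl.d/ (the tunable and the file name are only examples):

```shell
# Contents of the drop-in file; writing it under /etc/sysctl.d/ needs root.
SETTING='vm.swappiness = 10'        # example tunable, pick your own
printf '%s\n' "$SETTING"            # -> contents of /etc/sysctl.d/99-local.conf
# Apply without rebooting, then verify the running value:
#   sudo sysctl --system
#   sysctl vm.swappiness
```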

Modules/drivers that need to be configured when they are loaded can have their options passed at module-load time. You can do this manually when you modprobe the module, or add the desired options to a .conf file in /etc/modprobe.d (see `man modprobe.d`).
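For example (the module and option names here are illustrative; `modinfo <module>` lists what a given driver actually accepts):

```shell
# One-off, at load time:
#   sudo modprobe snd_hda_intel power_save=1
# Persistent: one line in a file under /etc/modprobe.d/, e.g. audio.conf:
LINE='options snd_hda_intel power_save=1'
printf '%s\n' "$LINE"
```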

For things that need to be set at early boot, you can also pass most of them on the kernel command line from the bootloader. Again, how to define the kernel command line is part of the bootloader configuration, so depends on which bootloader your distro is using and how it builds its configuration. Often this lives in /etc/default/grub, but you will probably have to rebuild the bootloader setup after modifying it. See https://www.kernel.org/doc/html/v5.17/admin-guide/kernel-parameters.html and your distro's docs.
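As a sketch for a GRUB-based distro (the added parameter is just an example, not a recommendation):

```shell
# In /etc/default/grub, extend the existing GRUB_CMDLINE_LINUX_DEFAULT line:
PARAMS='quiet splash mitigations=off'
CMDLINE=$(printf 'GRUB_CMDLINE_LINUX_DEFAULT="%s"' "$PARAMS")
echo "$CMDLINE"
# Then rebuild the bootloader config (distro-specific), e.g.:
#   sudo update-grub        # Debian/Ubuntu
# and after a reboot verify what the kernel actually received with:
#   cat /proc/cmdline
```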

Quote
It seems "systemctl" is the main way to control programs start at boot, both system services and user ones? Are there other ways for a program to start automatically(either on boot or on whatever trigger) without being shown in "systemctl"?
There are lots of other ways programs can start, for example as part of your X session login script at login-time, or literally any other program that decides to start something. Most stuff that is part of 'the system' and not user software will be handled by systemd so would show up under systemctl or systemctl --user, but there's probably some stuff managed by your desktop environment too.
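A few read-only commands to enumerate these places (a sketch; exact paths vary by distro and desktop environment):

```shell
# systemd-managed units, system-wide and per-user:
#   systemctl list-unit-files --state=enabled
#   systemctl --user list-unit-files --state=enabled
# cron jobs, which systemctl does not show:
#   crontab -l; ls /etc/cron.d /etc/cron.daily
# XDG desktop autostart entries, handled by the desktop environment:
#   ls /etc/xdg/autostart ~/.config/autostart
for place in systemd-system systemd-user cron xdg-autostart; do
  echo "worth checking: $place"
done
```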

Quote
When using package managers (such as apt), do they always verify the downloaded contents? (Assume I'm on a very rogue network.) I feel unsafe because on Windows I always download from trusted sources over HTTPS, but in the Linux world people mainly rely on package managers. Is there any way I can check that the package manager actually verifies files before installing anything?

The Linux way is generally much more secure, since most package managers verify packages against specific GPG signing keys, rather than relying simply on PKI trust as HTTPS does, which would allow e.g. a malicious mirror to send you bogus packages. They *also* mostly use HTTPS, but if you're doing GPG verification of the package signatures, it doesn't matter as far as integrity goes.

It is possible to disable signature checking, at least in apt, but no distro will do this by default, and you'll probably see warnings every time you run it. You can also enforce verification with the apt option --no-allow-insecure-repositories. See `man apt-secure`.
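A couple of concrete checks, assuming a Debian/Ubuntu-style apt (read-only apart from the update itself):

```shell
# Refuse to proceed, rather than just warn, if any repository is unsigned:
#   sudo apt-get update --no-allow-insecure-repositories
# Inspect which archive signing keys are currently trusted:
#   ls /etc/apt/trusted.gpg.d/
# The verification chain apt follows is described in `man apt-secure`:
MSG='apt verifies the signed Release/InRelease files, which carry checksums of every package'
echo "$MSG"
```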

Quote
I always feel unsettled by the "oom-killer" (which doesn't exist on Windows). Is there a way to completely disable it? (I've read tons of pages and found no answer for this.)

There's not really a good option here... the system is going to be in a pretty unusable state no matter what you do if mallocs are failing. The goal of the OOM killer is to kill the process responsible for the memory pressure, rather than randomly failing mallocs, which might not reduce memory pressure at all and is likely to crash a bunch of other processes on the system. The idea is to avoid one rogue process taking down the whole system. You do have a couple of choices though.

vm.oom_kill_allocating_task = 1 does what it says on the tin. Rather than the memory hog, whichever unlucky process needs memory and can't be satisfied is killed.
vm.panic_on_oom = 1 also does what it says on the tin. If you'd rather kernel panic in this situation, it's an option.

Fiddling with vm.overcommit_memory should also allow you to avoid the oom killer, but may cause other problems. Setting it to 2 disables memory overcommit, so the system can't get into a situation where it has allocated more memory than it actually physically has; malloc()s fail instead of succeeding and then leading to OOM later. However, this also means that applications can't allocate more virtual memory than the system has physical memory (or at least, not more than an amount determined by vm.overcommit_ratio), and many applications use 'sparse allocation', so you'll potentially be 'wasting' a lot of memory depending on your workload. But if you set vm.overcommit_ratio=0 and vm.overcommit_memory=2, then I believe it shouldn't be possible to hit the OOM killer; malloc() will just start failing.
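Put together as a drop-in file (values straight from the above; the file name is arbitrary, and since writing under /etc/sysctl.d/ needs root, this sketch writes to /tmp):

```shell
# OOM-related sysctls discussed above; uncomment the policy you want,
# drop the file into /etc/sysctl.d/ and run `sudo sysctl --system`.
printf '%s\n' \
  'vm.oom_kill_allocating_task = 1' \
  '# vm.panic_on_oom = 1' \
  '# vm.overcommit_memory = 2' \
  '# vm.overcommit_ratio = 0' > /tmp/90-oom-demo.conf
cat /tmp/90-oom-demo.conf
```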
73 de VE7XEN
He/Him
 
The following users thanked this post: Logan

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1209
  • Country: pl
Re: Some Linux questions from a noob
« Reply #3 on: March 23, 2022, 11:08:42 pm »
A wrong conceptual model. There is no single Linux-based system. There are different distributions based on the same kernel. That’s an entire family of operating systems: a dozen or so major distros, probably a hundred notable ones. To get an impression, see the Linux distros timeline on Wikimedia Commons; just zoom out to understand the scale. They are related, but not the same. They share the kernel (Linux) and the general software ecosystem, but in no way can it be expected that everything is the same. It’s critical to understand that a distro is just a collection of separate programs working together. While Windows is also such a collection, all of its components come from Microsoft; they are pre-installed, required, and you have close to no choice. That’s not true for Linux distros, where there are many different packages you may use to compose your operating system. The reason this is critical is that you are working with settings and features of specific software packages: there is no single way.

sysctl is a common userspace program offered by multiple software packages, not an inherent part of Linux distros. It acts as a frontend, but all the variables can just as well be accessed through procfs (mounted at “/proc”, with the variables available as normal paths under “/proc/sys”). It’s a convenience tool, but that’s all.
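The mapping is mechanical, as this sketch shows (assuming a Linux system; the variable chosen is a harmless read-only one):

```shell
# sysctl names map to /proc/sys paths: dots become slashes.
KEY=kernel.ostype
NODE=/proc/sys/$(echo "$KEY" | tr . /)   # -> /proc/sys/kernel/ostype
cat "$NODE"                              # prints "Linux", same as `sysctl -n kernel.ostype`
```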

There is no concept of “hidden settings”, so I’ll skip that question. Either settings exist or they do not; they are not hidden in any way.

There is no way around setting configuration after boot. First of all, most of those settings do not exist before boot. But even if they were known beforehand, the idea makes no sense. It’s not some kind of Linux shortcoming: it’s inherent to what those settings are. It works the same in any operating system; in some you are just not aware of it. Even your CPU microcode is updated at runtime and not stored anywhere. Linux has a concept of kernel parameters, but those are a very tiny subset of configuration that has to be known while the kernel itself is starting, because most of them affect kernel operation before everything else is even properly started. In other words, you call sysctl (or directly modify procfs) in a running system, because this arises from the very nature of the problem.

systemctl is a part of systemd. It’s just one of many such solutions, and many distros use different ones. So the answer to that question depends on whether systemd is even used by the distro. If it is, the question is still conceptually wrong. In modern systems there is no single thing called “at boot”. That idea stopped being relevant around the time MS-DOS died; even in Windows 9x it was no longer a single point in time.(1) Programs, including services, may be started from multiple places by multiple things, depending on what you want to run, when, at which point, and so on. What you should be asking is how to run some code or start some service at a specific event. On systemd systems, systemctl is used to manage systemd-compatible units (services, timers, targets, mounts, …). But e.g. your desktop environment will likely also have its own session management. Your shell will execute its own startup files. Programs acting as login shells will execute profile-specific startup scripts. Xorg will invoke its own. And so on, and so on.

Whether a package manager verifies packages depends on the particular package manager. It should; for example pacman does. And note that package verification is not about encrypting the connection, but about authenticating packages with maintainers’ signatures. Having TLS does not make a mirror a “trusted source”.

Instead of disabling the OOM-killer, you should understand its role. There is no reason to be worried about it. Instead of believing horror stories, work a bit with Linux, and if the OOM-killer ever kills a process, try to understand the entire situation. Aside from it not being frequent in normal system use, you will quickly learn that either it did exactly what you would have liked it to do, something was misconfigured, or both.


(1) There were at least two stages: booting the system and starting the user environment, each of them executing multiple files. Later a third stage was added: running the command prompt.
« Last Edit: March 24, 2022, 11:16:53 am by golden_labels »
People imagine AI as T1000. What we got so far is glorified T9.
 
The following users thanked this post: Someone, Logan

Offline Halcyon

  • Global Moderator
  • *****
  • Posts: 5681
  • Country: au
Re: Some Linux questions from a noob
« Reply #4 on: March 24, 2022, 04:51:41 am »
Welcome! I'll refrain from answering your questions as there are far smarter people on here who can explain things a lot better than I can.

However I will say this: I shared your sentiments several years ago. Windows 7 was the last version of Windows I used at home (I'm forced to use Windows 10 at work). I made the switch to Linux out of necessity and I haven't looked back. Based on my experience, I offer you this advice:

  • There is no such thing as "the best distro". It seems like so many people in the Linux community have an opinion on why the distro they use is better than others. Some people even go as far as to ridicule others for their choices. Ignore all that noise and try various distros for yourself. Use what works for you and what you feel comfortable with. Manjaro was my daily driver for the last year or so but I've recently made the switch to Pop!_OS (horrible name, but it does what I want without too much dicking around).
  • Stick at it. Over the past 23+ years of my technical career, I tried various flavours of Linux, going all the way back to the early days of Red Hat. Back then, it was all "too hard" and I was already comfortable with DOS/Windows, so I didn't bother. I regret not learning it earlier. It can be a steep learning curve, but if you're already familiar with the DOS command line, then you're half-way there already. You will come across little niggles and issues, but there are always solutions.
  • Go in with an open mind. Linux is a very different beast to DOS/Windows and some of the concepts might seem very odd at first. There will probably be some changes to your normal workflow, but roll with the punches. You might actually find a better way to do things.
  • Leaving Windows doesn't mean you have to leave your old applications behind. Just because you want to switch to Linux doesn't mean you have to find replacements for all those applications you love. Many of them have Linux ports which work exactly the same way (including games). But if you do have that one special application that you can't (or don't want to) replace, you can probably run it seamlessly under something like WINE without needing to spin up a VM. Of course, there are some applications (like LibreOffice) which are simply better than some Windows-based offerings. Microsoft Office is horrible to use these days. LibreOffice is a lot like Microsoft Office 2007, before Microsoft changed it to that stupid "ribbon" crap and did away with the traditional menu bar. Items and options in LibreOffice are (mostly) in a logical, easy-to-find place.
Have fun!
« Last Edit: March 24, 2022, 04:54:51 am by Halcyon »
 
The following users thanked this post: Ranayna, Logan

Offline wilfred

  • Super Contributor
  • ***
  • Posts: 1252
  • Country: au
Re: Some Linux questions from a noob
« Reply #5 on: March 24, 2022, 06:02:40 am »
For a Linux noob coming from a Windows PC I recommend Linux Mint XFCE. I found it to be an easy transition, and it looks about as familiar as can be expected.

Pick a popular distro, because that is where it is easiest to find answers to your questions. Mint is among the best in this regard. I have never had problems with upgrades.

Since there isn't a "best" linux distro for everyone you just pick one and dive in. But if you want something kinda similar to the windows look then Mint XFCE is pretty good.

If/when you want to try another one: when setting up your PC for the first time with Linux, I suggest making a separate partition (filesystem) for your /home. That way you can install a new distro and connect your existing /home filesystem to it. There are caveats with doing this, e.g. you don't want to point older software in a new distro at config files whose format has changed.
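For illustration, a sketch of what such a setup looks like in /etc/fstab (the UUID is a placeholder; use `blkid` to find the real one):

```shell
# A separate /home partition gets its own fstab line; a reinstall of "/"
# can then leave it untouched.
FSTAB_LINE='UUID=<your-home-uuid>  /home  ext4  defaults  0  2'
echo "$FSTAB_LINE"
```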
 
The following users thanked this post: Logan

Offline magic

  • Super Contributor
  • ***
  • Posts: 6779
  • Country: pl
Re: Some Linux questions from a noob
« Reply #6 on: March 24, 2022, 06:30:15 am »
sysctl is a common userspace program offered by multiple software packages, not an inherent part of Linux distros. It acts as a frontend, but all the variables can as well be accessed through sysfs (mounted as “/sys”, variables available through normal paths). It’s a convenience tool, but that’s all.
Through procfs at /proc/sys/.
 
The following users thanked this post: golden_labels, Logan

Offline PKTKS

  • Super Contributor
  • ***
  • Posts: 1766
  • Country: br
Re: Some Linux questions from a noob
« Reply #7 on: March 24, 2022, 09:38:57 am »
Hi friends.

"sysctl" seems to be the primary system level settings, right? Are there any "hidden settings" other than sysctl?
(..)
Thank you all.

Well...  this is not like a direct answer...

I read all the comments above... I will not correct any of them...
but instead just point out a few things, so that in time anyone can see what is correct or wrong about such extensive answers...

1 . https://en.wikipedia.org/wiki/Unix
An extensive culture lives around a rock-solid OS meant for real-world use of computers as a business tool, instead of a toaster with a cute interface...

2. https://en.wikipedia.org/wiki/UNIX_System_V
Perhaps the most important and critical part of it - THE BOOT sequence controls all system resources - including RUNLEVELS for a MultiUser system (not to be confused with a multitasking system) - INIT and TELINIT are the key tools

3. https://en.wikipedia.org/wiki/BusyBox
A full self-contained set of base *nix tools in GNU style. FREE OF UDEV and SYSTEMD, and it can deploy a minimal, completely self-contained in-memory system (+kernel) with SINGLE-USER OR MULTIUSER LEVEL BOOT

4. https://en.wikipedia.org/wiki/Systemd
A COMPLETELY DIVERGENT implementation of everything above
SYSTEMD distros are not UNIX
They forked from every single rule of best practice in the book.
Why?   RedHat sponsors systemd development because they sell large-volume contracts..
They need something very close to mimicking MS*** shitty OS so they can sell to end customers ..
It has completely diverged, creating a whole new thing..

All concepts are messed up in systemd, and you can barely control a single-user, UDEV-free boot like busybox can..  they are not compatible.  You have a shitty automated everything-hidden shit..
just like the computer toaster from the 90s...

Google for SYSTEMD-FREE DISTROS   (Slackware is the top choice)
Google for busybox-based distros... ( TinyCore https://en.wikipedia.org/wiki/Tiny_Core_Linux is the best choice )

BTW I have my own complete from-scratch BUSYBOX distro
and a whole LFS-forked, systemd- and UDEV-free system...

All SysV RC based boot

Paul
 :-+


PS/ ohhh and BTW.. there is absolutely nothing modern in systemd..
 IT MIMICS THE SAME OLD SHITTY SCHEMA ... introduced by OS/2 in the early 90s...
same old shit.. guess who owns and sponsors systemd now?
« Last Edit: March 24, 2022, 09:48:49 am by PKTKS »
 
The following users thanked this post: Ed.Kloonk, Logan

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1209
  • Country: pl
Re: Some Linux questions from a noob
« Reply #8 on: March 24, 2022, 11:17:55 am »
Through procfs at /proc/sys/.
Ooops! Thanks, corrected. 😳
People imagine AI as T1000. What we got so far is glorified T9.
 
The following users thanked this post: Logan

Offline LoganTopic starter

  • Frequent Contributor
  • **
  • Posts: 345
  • Country: us
Re: Some Linux questions from a noob
« Reply #9 on: March 24, 2022, 10:07:21 pm »
Wow,  every reply is so helpful!

Fiddling with vm.overcommit_memory should also allow you to avoid the oom killer, but may cause other problems.
Sure it is; I tested this method a month ago on a virtual machine and the result was a disaster. On Ubuntu XFCE with 2 GB RAM and vm.overcommit_memory=2, the desktop GUI couldn't even start, with 1.7 GB of RAM left.  |O

you will quickly learn that either it did exactly what you would like it to do
Isn't it better to just refuse to give out more memory when there's none left, rather than killing some processes?

Anything I haven't quoted here is solved!
Thanks a lot to everyone!
 

Offline ve7xen

  • Super Contributor
  • ***
  • Posts: 1193
  • Country: ca
    • VE7XEN Blog
Re: Some Linux questions from a noob
« Reply #10 on: March 25, 2022, 12:17:22 am »
Isn't it better to just refuse to give out more memory when there's none left, rather than killing some processes?

Most software is not really designed to handle memory allocation failure gracefully. Usually if a malloc() fails, the program shuts down, or worse, dereferences the null pointer it was returned and crashes. But the problem is actually trickier than that, since the Linux virtual memory system uses demand paging by default. Virtual memory allocations don't generate physical memory allocations; they just set up the page table. Only when there is a page fault is the physical memory page allocated. If this fails because the system has no available memory and can't page anything else out, the kernel has no choice but to kill whatever process is affected (or, in the OOM killer case, it may kill a different process); there isn't really even a mechanism to signal to the process what is happening so that it could handle it gracefully. It's also possible that the kernel itself is the one that causes the page fault, and then you're back to deciding what to kill (or just panicking the system). Or you can choose to block those processes until somehow memory becomes free, but that probably just leads to a hung system.

When you set vm.overcommit_memory=2 you're not turning demand paging *off*, but you're controlling how much more virtual memory than physical+swap memory is allowed to be allocated (possibly forcing it to 1:1), which makes it harder / impossible for the system to end up in a state where the above can happen (more virtual memory than real memory is allocated, and for whatever reason all the physical pages are locked, and a page fault occurs).

The idea behind the OOM killer is that rather than randomly failing whatever process happens to be unlucky, you would probably rather kill a process that is likely to get the system out of memory pressure. You don't want to kill a tiny 'sh' process if you'll end up killing something else in short order anyway, and you probably want to prefer killing whatever is *causing* the memory pressure by doing a lot of page faulting lately, not some quiet background process. You probably also want to avoid important processes like init and so on.

FWIW, Windows behaves basically the same as 'vm.overcommit_memory=2; vm.overcommit_ratio=0', AFAIK. It will refuse any allocations that would push the committed memory over the actual available memory, and it also uses demand paging in its virtual memory system, so it can never get into a state where it needs to kill anything to satisfy a page fault; but when it runs out of memory, it still runs into severe problems. As you'd expect, software on Linux is designed for Linux norms, so it doesn't expect this behaviour. On a desktop system just use vm.overcommit_memory=0 like a normal person :p.
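If you want to watch this commit accounting while experimenting, the kernel exposes it read-only (a sketch; works on any Linux system):

```shell
# CommitLimit is the ceiling enforced when vm.overcommit_memory=2;
# Committed_AS is how much address space is currently committed.
grep -E '^Commit' /proc/meminfo
cat /proc/sys/vm/overcommit_memory   # current policy: 0, 1 or 2
```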
73 de VE7XEN
He/Him
 
The following users thanked this post: DiTBho, Logan

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1209
  • Country: pl
Re: Some Linux questions from a noob
« Reply #11 on: March 25, 2022, 12:40:52 am »
Isn't it better to just refuse to give out more memory when there's none left, rather than killing some processes?
It seems better, but only if one already assumes a model in which it is better. If one drops some assumptions, there isn’t much left to support the idea.

Note: I do not want this to be taken as a one-sided defence of the OOM-killer or a claim that it’s perfect. I want to give a wider view of the problem and provide a counterweight to the extremely negative views some other people peddle.

To start with: it is incompatible with the idea that a process requests address space, not backed memory.(1) Even on systems which usually provide backing for memory on allocation, like Windows, there are exceptions for which handling that case is cumbersome.(2) Yes, you read that right: contrary to common lore, Windows does support overcommit and also kills processes if backing fails.(2) It has since forever, and the reasons you have probably never heard about it are of two kinds. First, it’s rarely observed: the standard libraries have chosen to explicitly commit memory on allocation, while the default and unavoidable overcommit happens only for a relatively small number of pages. Second, if anything fails, the error message does not speak about “the dreaded OOM-killer”, but is an undecipherable error number in a generic “process crashed” window. The situation is indistinguishable from any other invalid memory access.

A huge part of “it’s better” is derived from narrowing the view to the presence or absence of that specific condition. OOM-killer present: bad. OOM-killer not present: butterflies, rainbows and unicorns… except not. There are a number of other reasons a process may get killed on memory access, even for already committed memory. For example, the device on which swap resides may experience a read error, RAM may see an unrecoverable ECC error, or a bit may randomly flip in an address. Those are rare, but the supposedly disastrous shenanigans of the OOM-killer are not more probable.

The OOM-killer does not kill processes randomly. In the memory leak scenario, which is the most often seen cause of the OOM condition, the offending process is the one being killed. Which, I would argue, is exactly what you want. Otherwise it may be some other process, but ask yourself a question: if there is no OOM-killer, what happens? Some process, “chosen” randomly by being unfortunate enough to allocate memory at the wrong time, is signaled that it can’t acquire it. At which point it dies, because the way a typical application works makes it impractical to handle memory allocation errors. So… where is the difference? It dies because it nukes itself, or it is killed by the OOM-killer: the latter having the option to take a more intelligent approach to choosing its victim.

The very specific (but rare) exception is important processes in safety-critical systems. Those can be designed to handle such special situations. Yes, this is the actual case where a naïvely understood OOM-killer is a bad idea. The crucial part: “naïvely understood”. The catch? That image of the OOM-killer has not been aligned with reality for a long, long time. You can adjust the OOM-killer score of a process, protecting it from being killed. You can put less important processes into cgroups and limit their memory usage, ensuring there will always be enough memory for your critical programs. Services can be automatically restarted if they die, which is a better idea than merely protecting them: keep in mind there are causes other than memory exhaustion from which a program may crash, and what you perceive as a critical process may also have a runaway memory leak. And, finally, memory exhaustion is a situation in which your entire system is operating in such a degraded and unstable state that you want something to get rid of some potential offenders to save the important applications.
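A small sketch of the score adjustment mentioned above (the value 500 is arbitrary; raising your own process's score needs no root, lowering it does):

```shell
# Make the current shell a preferred OOM victim by raising its score
# (range is -1000..1000; higher means killed earlier).
cat /proc/$$/oom_score_adj        # usually 0 by default
echo 500 > /proc/$$/oom_score_adj
cat /proc/$$/oom_score_adj        # now 500
# Protecting a process (negative values) needs root, e.g.:
#   echo -500 | sudo tee /proc/<pid>/oom_score_adj
# Under systemd, OOMScoreAdjust= and MemoryMax= in a unit file do the
# same per-service, via cgroups.
```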


(1) It’s implementable, but for an average program it’s an insane approach to handling failures of memory provision in the overcommit situation. It’s theoretically doable for very specific kinds of programs, but your typical application would never be able to sanely handle such a condition. It requires knowing at exactly which point the failure has occurred, in which thread, on which operation. The program must be crafted to account for that, which quickly turns into a nightmare for normal business logic.
(2) See VirtualAllocEx, specifically anything referring to invoking it without MEM_COMMIT, and the copy-on-write mechanism used for binaries.
People imagine AI as T1000. What we got so far is glorified T9.
 
The following users thanked this post: DiTBho, Logan

Offline westfw

  • Super Contributor
  • ***
  • Posts: 4199
  • Country: us
Re: Some Linux questions from a noob
« Reply #12 on: March 25, 2022, 09:35:19 am »
Quote
"sysctl" seems to be the primary system level settings, right?
I have to ask: why do you, as an "inexperienced Linux user", think that you immediately want to leap into fiddling with kernel parameters of the sort modified by sysctl? Did you have some weird tweak that you used with Windows that you think you'll need to duplicate?

Your "new user" experience is likely go a lot smoother if you can read documentation. watch tutorials, and ask for advice based on have a system that is close to "standard distribution."  I'd think your time would be better spent finding a distribution whose "standard" configuration is close to what you want, and gaining some user-level experience before you dive off the deep end and make customizations beyond those you can make with the user-oriented utilities...
 
The following users thanked this post: Logan

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3915
  • Country: gb
Re: Some Linux questions from a noob
« Reply #13 on: March 25, 2022, 11:19:21 am »
When you are so unlucky that you cannot find a dude in Japan who can forward-ship a bloody SH4 board on steroids, and the only board you have around in your lab, the only one that can be used to compile a whole uclibc rootfs, is ... a broken-down wheelbarrow ...

... so you can only use the broken-down wheelbarrow as the build system, and it only comes with 64 MByte of soldered RAM, and you have to compile CMake, which is written in C++, and during the compile process g++ needs to eat up to 800 MByte of RAM.

Does it sound like a recipe for disaster?
Something a "normal person" should never think of?

 Yup, no doubt about, but ...

... still, QEMU/SH4 sucks (for other reasons), but you can set kernel parameters to have swap over NFS: up to 2 GByte over a poor 10/100 Mbit/sec link with an average of 8 MByte/sec, which took 2 weeks (24h/day) to complete the process

Crazy, it worked!  :D :D :D
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 
The following users thanked this post: Logan

Offline Zipdox

  • Regular Contributor
  • *
  • Posts: 170
  • Country: nl
Re: (solved) Some Linux questions from a noob
« Reply #14 on: March 28, 2022, 11:44:12 am »
Excuse the copypasta. But I think this is important to know.

Quote
I'd just like to interject for a moment. What you're referring to as Linux, is in fact, GNU/Linux, or as I've recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.

Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called Linux, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project.

There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates the machine's resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called Linux distributions are really distributions of GNU/Linux!

There are a lot of components that make up a GNU/Linux distribution. The Linux Kernel itself is just a small component of it, and you shouldn't focus on it too much. There's the init system (typically systemd), package manager (apt, rpm, pacman, etc.), the display server (Xorg or Wayland), there's a sound server (PulseAudio, PipeWire or JACK), a desktop environment (KDE, GNOME, XFCE, Cinnamon, LXDE, LXQt, Mate, etc.) and many more components. Many of these components are interoperable thanks to the freedesktop.org project.

My advice is to not worry about Kernel parameters. Focus on getting familiar with the userspace software first. The userspace software is designed to facilitate the control of your hardware as well.
 
The following users thanked this post: alexanderbrevig

