Author Topic: Linux Dependency Black Hole  (Read 2597 times)


Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Linux Dependency Black Hole
« on: April 24, 2024, 03:53:05 pm »
I used Linux in engineering for 15 years; now I am a hobby user in ham radio.
When installing applications, I recall only a few issues in prior years, but after about 2020 the dependency issues started to escalate.

This year I have been installing Software Defined Radio applications with lots of dependency problems, and this week I had to abandon attempts to install
the NXP MCUX IDE on Fedora due to an inability to resolve wrong/missing dependencies. And over on the HamRadio section, Metrologist has just documented the output while trying to install a ham radio application on a Debian Pi.
Engineering software for industrial use needs a lifetime of over 15 years. Similarly, ham radio SDR software followed the surge of hardware development back around 2014 and is still current.

I would like to ask a few questions to improve my understanding of how and why dependencies became a problem for users trying to install applications.

1A) When a dependency itself is updated, is it normal/allowable for the original/prior functionality of that utility to be removed?
   That is, if a function was previously called with a set of args and gave an output, why can't that functionality be retained "forever" alongside
    the new additions that come out with a new version number appended to the dependency filename?
   Examples that do work "forever" are the bash utilities, sqlite, stty, etc. Why can't the libs be more like that?
   If backward compatibility could be made a rule, it would take away 80% of dependency failures, in my experience.

1B) Conversely, why does the app installer not throw a warning and proceed anyway with the updated lib that it finds?
   In my experience on Fedora, rolling back a dependency is difficult and can also disable applications already installed.

2) Sometimes when installing a missing dependency, I see it is just a few kB or MB. Why would the app developer not include such small functions in the app?

3) Python 3: I am fairly sure that Python 3 versions have always retained the ability to run deprecated functions. So why would an app want a specific Python 3 version?

4) Paths: I saw on Fedora that system paths do change over the years. The simple way for a user to fix a "can't find" would be to locate the file and export the PATH or LD_LIBRARY_PATH.
Why can't the app installer check the system paths before throwing a "can't find"?
   An example: the NXP app MCUXpresso says just "Can't find Python3.8" and doesn't say the path it searched. When I installed it in ~/.pyenv and exported that path, it was still "Can't find Python3.8".
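Question 4 can at least be narrowed down from the user side: a short shell check shows what is actually on PATH before blaming the installer. A sketch, where the versions/3.8.18 directory is an assumption (use whatever pyenv actually installed):

```shell
# Is python3.8 visible on PATH at all?
command -v python3.8 || echo "python3.8 not on PATH"

# Put a pyenv-installed interpreter first on PATH
# (the versions/3.8.18 directory is hypothetical):
export PATH="$HOME/.pyenv/versions/3.8.18/bin:$PATH"

# Show the search path an installer launched from this shell would inherit:
printf '%s\n' "$PATH"
```

An installer started from the same shell inherits this PATH, so this is the first thing to rule out when it claims it can't find an interpreter.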

 The way this is going, I can see that soon we will have to install apps on their own dedicated, complete OS.
 And that is what I am now trying to do to solve the present problem.
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11385
  • Country: us
    • Personal site
Re: Linux Dependency Black Hole
« Reply #1 on: April 24, 2024, 04:54:39 pm »
1A) When a dependency itself is updated, is it normal/allowable for the original/prior functionality of that utility to be removed?
If the major version of the library changes, then the entire API may be different.

A good example here is libusb. The 0.1 and 1.0 versions of the library use entirely different approaches to the API. They did not change it "just because"; they changed it to support higher performance.
Maintaining APIs forever is a lot of work and nobody wants to do it for free.

   If backward compatibility could be made a rule, it would take away 80% of dependency failures, in my experience.
Are you willing to pay for it?


1B) Conversely, why does the app installer not throw a warning and proceed anyway with the updated lib that it finds?
   In my experience on Fedora, rolling back a dependency is difficult and can also disable applications already installed.
Major versions of the libraries are usually not installed automatically. And multiple major versions can usually coexist. Minor versions rarely break APIs. And when they do, it is not on purpose.

Sometimes things break because application authors used undocumented APIs. And this is on the application authors.
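The coexistence of major versions works because each major release gets its own soname, so the files never collide on disk. A toy illustration (the directory and the filenames below are stand-ins created just for the demonstration, not real installed libraries):

```shell
# Two major versions of a library can live side by side because their
# sonames differ; the loader picks whichever one a binary was linked against.
mkdir -p /tmp/soname-demo
touch /tmp/soname-demo/libusb-0.1.so.4 /tmp/soname-demo/libusb-1.0.so.0
ls /tmp/soname-demo
```

This is why installing a new major version does not have to break applications linked against the old one: they keep resolving the old soname.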

2) Sometimes when installing a missing dependency, I see it is just a few kB or MB. Why would the app developer not include such small functions in the app?
Because then you will have to maintain the code. Plus sometimes there are license restrictions.

3) Python 3: I am fairly sure that Python 3 versions have always retained the ability to run deprecated functions. So why would an app want a specific Python 3 version?
Likely because the app is written poorly.

   An example was that the NXP app MCUXpresso says just "Can't find Python3.8"
MCUXpresso is a cost center for NXP, not a profit center, and it gets a corresponding amount of attention. All vendor tools from all IC vendors are bad: they cost money and don't directly contribute to profits. As long as they mostly work, that is fine. FPGA IDEs are some of the worst software ever created, yet there are no alternatives, and what are you going to do? Not use that FPGA?

Your questions are too abstract and actual details would not be universal and would depend on the specific cases.
« Last Edit: April 24, 2024, 04:58:28 pm by ataradov »
Alex
 
The following users thanked this post: Karel, bpiphany, quince

Online coppice

  • Super Contributor
  • ***
  • Posts: 8837
  • Country: gb
Re: Linux Dependency Black Hole
« Reply #2 on: April 24, 2024, 05:21:29 pm »
The current version of the ITU G.1050 spec is at https://www.itu.int/rec/T-REC-G.1050-201607-I/en . It's from 2016: not exactly new, but not exactly ancient. Try using the software provided there. It requires numerous things, mostly open source, but it requires specific revisions of many of those things. It looks like within 2 years of the publication of that spec you couldn't find some of the necessary versions on the internet, and now you can find almost none. So much for the notion that with open source you just publish and somewhere on the internet it will be effectively archived. :)

Demanding too much backwards compatibility is a bad thing. You learn a lot about how to design something well by getting a first weak pass out the door. If you can't be incompatible you are locking in a lousy design forever. You do need to keep the old stuff relevant for a very long time, though. ataradov referenced libusb totally breaking compatibility. gstreamer and other software has been through the same thing. Some of those things struggled to move people on to the better solution, but that did discipline them to keep the old version usable and relevant. Many people are too keen to let the old stuff rot when they are able to carry a good number of people forward to the new version.
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11385
  • Country: us
    • Personal site
Re: Linux Dependency Black Hole
« Reply #3 on: April 24, 2024, 05:36:36 pm »
I find it hard to complain and demand people keep the old stuff up to date given that all those things are developed for free. Bigger projects like GStreamer have some monetary support, but smaller libraries don't, even though they are often at the foundation of everything. And even when there is monetary support, the people allocating the money usually have some influence on how it is spent, and there is rarely an incentive to spend it on keeping the old stuff up to date.

If anything, a more reasonable demand is to have the applications that use those libraries be kept up to date. It is strange to ask authors of the libraries to do more work because you don't want to do the work yourself.
Alex
 
The following users thanked this post: Karel, bpiphany, SiliconWizard

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #4 on: April 24, 2024, 05:37:49 pm »
Thanks ataradov and coppice
Yes I understand this is all FOSS. I support linux and FOSS and believe the concept has been of significant benefit over the years.
DSP is an outstanding example in my area of interest, another is image processing.

I am pointing out that the problems with Linux dependencies are getting worse, to the level of "reasonable" users not being able to install,
and I am asking why.
 

Offline Zucca

  • Supporter
  • ****
  • Posts: 4380
  • Country: it
  • EE meid in Itali
Re: Linux Dependency Black Hole
« Reply #5 on: April 24, 2024, 05:55:02 pm »
The current version of the ITU G.1050 spec is at https://www.itu.int/rec/T-REC-G.1050-201607-I/en . It's from 2016

Would it be worth trying it with some old Linux distro from 2016 in a VM?
Can't know what you don't love. St. Augustine
Can't love what you don't know. Zucca
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11385
  • Country: us
    • Personal site
Re: Linux Dependency Black Hole
« Reply #6 on: April 24, 2024, 05:56:41 pm »
It depends on what you are installing. Most libraries and software in the distribution repositories are usually well matched, since package maintainers do that work.

If you are installing some random software you downloaded as a *.tar.gz, then you are on your own. If you are installing some proprietary software, you need to make sure your system matches the requirements. Usually those things target recent versions of mainstream distributions.

I would not say it is getting worse, if anything, it is getting better. But for things to get better, incompatibilities are introduced, so all the old software that was not updated in decades is less compatible. The new software benefits from improvements though.

Having things improve while staying the same is the perfect example of incompatible requirements.
Alex
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8837
  • Country: gb
Re: Linux Dependency Black Hole
« Reply #7 on: April 24, 2024, 06:02:07 pm »
If anything, a more reasonable demand is to have the applications that use those libraries be kept up to date. It is strange to ask authors of the libraries to do more work because you don't want to do the work yourself.
I'd be happy if they just kept the old releases available, even if various bit rot issues stop them compiling out of the box with other current stuff. I would probably have got the G.1050 code going if they had. As it was, the number of issues I would have needed to sort out looked overwhelming.

I recently had reason to want to build a 20 year old version of GCC. That was easily available, but various other things of the same vintage were not, and the effort to build it was just too much. So, I got a 20 year old installer for Fedora that contained the version of GCC I wanted (Fedora 3 I think), and tried to install that on a spare machine. It seems that was the changeover from IDE to SATA, and I couldn't find either old IDE based hardware or old SATA based hardware where the disk drivers in that version of Fedora were happy. I gave up on that too.
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 8837
  • Country: gb
Re: Linux Dependency Black Hole
« Reply #8 on: April 24, 2024, 06:03:35 pm »
The current version of the ITU G.1050 spec is at https://www.itu.int/rec/T-REC-G.1050-201607-I/en . It's from 2016

Would it be worth trying it with some old Linux distro from 2016 in a VM?
They require a bunch of things not in the distros, and the relevant old revisions of some of those things are no longer available online.
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #9 on: April 24, 2024, 06:09:35 pm »
It depends on what you are installing. Most libraries and software in the distribution repositories are usually well matched, since package maintainers do that work.

If you are installing some random software you downloaded as a *.tar.gz, then you are on your own. If you are installing some proprietary software, you need to make sure your system matches the requirements. Usually those things target recent versions of mainstream distributions.

I would not say it is getting worse, if anything, it is getting better. But for things to get better, incompatibilities are introduced, so all the old software that was not updated in decades is less compatible. The new software benefits from improvements though.

Having things improve while staying the same is the perfect example of incompatible requirements.
My experience over 18 years is that the Fedora dependency problem has become not better but a lot worse since about 2020, though that could be partly due to my usage changing from day job to ham radio.
And yes, the packages installed from repositories by dnf usually had commendably few or no dependency problems.
« Last Edit: April 24, 2024, 06:11:33 pm by mag_therm »
 

Offline xvr

  • Frequent Contributor
  • **
  • Posts: 303
  • Country: ie
    • LinkedIn
Re: Linux Dependency Black Hole
« Reply #10 on: April 24, 2024, 06:21:42 pm »
More and more developers use Docker for distribution of their products. This could be a solution to the dependency problem.
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #11 on: April 24, 2024, 06:47:40 pm »
More and more developers use Docker for distribution of their products. This could be a solution to the dependency problem.
I have not had an app so far which could have used Docker. I have installed two as .AppImage, with no problems, and I read that AppImage works across all Linux variants too.
Maybe that is the way forward, but not many are available yet.
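For reference, using an AppImage needs no install step at all, only the execute bit. The filename below is a placeholder, and the file is created empty in /tmp just to make the steps concrete:

```shell
# An AppImage is a single self-contained file; no package manager involved.
# (Empty stand-in file, for illustration only.)
touch /tmp/SDRApp.AppImage
chmod +x /tmp/SDRApp.AppImage
ls -l /tmp/SDRApp.AppImage   # executable now; normally you would run ./SDRApp.AppImage
```

Because nothing is registered with the system, removal is just deleting the file.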
 

Online DavidAlfa

  • Super Contributor
  • ***
  • Posts: 6039
  • Country: es
Re: Linux Dependency Black Hole
« Reply #12 on: April 24, 2024, 09:09:03 pm »
Ahh, Linux. Yep. FOSS heaven (Fu**ed Open Source Software).
The other day I removed OpenOCD to manually install a newer one. Got asked to remove no-longer-necessary packages. Sure!

It completely uninstalled my desktop, leaving me dead in the shell.
Sadly (or luckily) this already happened a few years back, so I quickly checked whether GNOME had been eradicated and got it back within 10 minutes.

Sorry I can't answer your questions. So many "whys" in my head too. So much of it makes no sense, in so many ways.
The basics are nice. Anything else is pain and time-wasting.
Maybe some people love browsing forums for 2 hours every time they need to fix or make something; I don't.
I guess that's a golden-egg-laying chicken in IT: it's a maintenance hell, hours and hours, $$$.
« Last Edit: April 24, 2024, 09:17:09 pm by DavidAlfa »
Hantek DSO2x1x            Drive        FAQ          DON'T BUY HANTEK! (Aka HALF-MADE)
Stm32 Soldering FW      Forum      Github      Donate
 

Online DimitriP

  • Super Contributor
  • ***
  • Posts: 1355
  • Country: us
  • "Best practices" are best not practiced.© Dimitri
Re: Linux Dependency Black Hole
« Reply #13 on: April 24, 2024, 09:32:08 pm »
More and more developers use Docker for distribution of their products. This could be a solution to the dependency problem.

Depending on Docker to avoid dependency issues might be an issue depending on ...Docker dependencies...
   If three 100  Ohm resistors are connected in parallel, and in series with a 200 Ohm resistor, how many resistors do you have? 
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #14 on: April 25, 2024, 12:28:58 am »
I ran a grep on an application tar.gz to see how the system dependencies are defined.
Here is a snippet from the valgrind suppression file in the tar.gz:

QQ-valgrind.linux.supp:148:   obj:/usr/lib/*/libpulse*
QQ-valgrind.linux.supp:169:   obj:/usr/lib/*/libQt5Gui*
QQ-valgrind.linux.supp:182:   obj:/usr/lib/*/libfontconfig*.so.1.8.0
QQ-valgrind.linux.supp:265:   obj:/usr/lib/*/libfreetype*
QQ-valgrind.linux.supp:272:   obj:*/lib/*/libdbus*
QQ-valgrind.linux.supp:279:   obj:/usr/lib/*/libcairo*

So it seems that in the tar.gz, dependency versions are not explicit; they would come from whatever system the tar.gz was made on.
The user who built from the tar.gz would not have dependency version problems.

Then I opened the .rpm (for Fedora).
Here the files are mostly binaries, and there is a /bin/valgrind for Linux.
I don't know if valgrind is run again on the user machine during installation.
I suspect not, because we have the dependency version problem.

So, when facing a big dependency mismatch, we may be better off building from the tar.gz rather than trying to use the .rpm/.deb etc. packages.
Can anybody familiar with this comment?
Thanks!
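For binary packages, the declared dependencies can be inspected directly from the package metadata rather than by grepping the contents. A sketch, guarded so it works with either package manager ('bash' is just an example package name):

```shell
# Show what a package declares as its runtime dependencies.
if command -v rpm >/dev/null 2>&1; then
    rpm -qR bash 2>/dev/null || true   # Fedora/RHEL: requirements of an installed package
    # rpm -qpR some-package.rpm        # same query on a not-yet-installed .rpm file
elif command -v dpkg >/dev/null 2>&1; then
    dpkg -s bash | grep '^Depends'     # Debian/Ubuntu equivalent
else
    echo "no rpm or dpkg on this system"
fi
```

The versioned requirements shown here (e.g. `libc.so.6(GLIBC_2.34)`) are exactly what dnf/apt try to satisfy, which is why a prebuilt package can fail on a system whose libraries are newer or older than the build host's.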
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11385
  • Country: us
    • Personal site
Re: Linux Dependency Black Hole
« Reply #15 on: April 25, 2024, 12:34:26 am »
The user who built from the tar.gz would not have dependency version problems.
This is not correct in general. There is no common way to define dependencies in a tar.gz outside of the autotools stuff, which rarely covers everything. It is mostly just basic checks.

You will have all sorts of issues if you build stuff from source and let it be installed into your system outside of the package manager.

So, when facing a big dependency mismatch, we may be better off building from the tar.gz rather than trying to use the .rpm/.deb etc. packages.
This is the worst idea ever. If I absolutely need to install something from source, I at the very least specify --prefix=/opt, so it does not mess up the main file system. And then you still rely on the package author not being an idiot, which is not a given.
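The --prefix idea looks like this in practice. The project name below is a placeholder; the build commands are shown as comments since the tarball is hypothetical, and the demo just creates and removes a prefix directory to show the payoff:

```shell
# Confine a from-source build to its own prefix, away from the
# package manager's files. Typical sequence (placeholder project):
#   tar xf example-1.0.tar.gz && cd example-1.0
#   ./configure --prefix=/opt/example-1.0
#   make && sudo make install
# The payoff: removal is just deleting the prefix, nothing else touched.
PREFIX=/tmp/opt-demo/example-1.0   # /tmp stand-in for /opt in this demo
mkdir -p "$PREFIX/bin" "$PREFIX/lib"
ls "$PREFIX"
rm -rf /tmp/opt-demo               # clean uninstall
```

Nothing under /usr is ever written, so the system package manager's view of the world stays consistent.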
« Last Edit: April 25, 2024, 12:35:58 am by ataradov »
Alex
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #16 on: April 25, 2024, 12:58:48 am »
Well, I have built stuff from tar.gz, in fact two recently for the SDR.
Always in ~, as I recall.
I presume many others around the world have too.

And please don't take all this so emotionally; it's only a hobby for me.
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11385
  • Country: us
    • Personal site
Re: Linux Dependency Black Hole
« Reply #17 on: April 25, 2024, 01:06:09 am »
I don't take it emotionally. I don't really care what you do to your system. But what bothers me is when people go on the forums and complain that "Linux sux" when they mangled their system by installing random stuff from tar.gz.

Installing into "~" is fine. Installing into "/" is just asking for trouble.

But it also depends on the type of system. If this is some random RPi that can be re-imaged when necessary, then who cares. If this is your primary system, not destroying it is a good call.
Alex
 
The following users thanked this post: Karel, bpiphany

Offline linux-works

  • Super Contributor
  • ***
  • Posts: 2000
  • Country: us
    • netstuff
Re: Linux Dependency Black Hole
« Reply #18 on: April 25, 2024, 01:12:13 am »
AppImage is a large image, but it WORKS, and that's great. For many products I just take the easy way out and do the AppImage thing. Cura is one of them and OpenSCAD is another; both are too large and hard to build/install, so I just take the AppImage route. It's not ideal, but it works.

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #19 on: April 25, 2024, 01:32:00 am »
I don't take it emotionally. I don't really care what you do to your system. But what bothers me is when people go on the forums and complain that "Linux sux" when they mangled their system by installing random stuff from tar.gz.

Installing into "~" is fine. Installing into "/" is just asking for trouble.

But it also depends on the type of system. If this is some random RPi that can be re-imaged when necessary, then who cares. If this is your primary system, not destroying it is a good call.
As I wrote earlier, I am a staunch Linux user of 18 years, having earned my living from it, and I like messing about with it now in ham radio.
From my OP you can see I am not complaining about Linux; I am asking questions about the increasing problem of dependencies.

I am retired with a few leftover computers.
For risky trial stuff like trying this MCUXpresso, I use a standby computer, on Fedora 37 now.
On that one I don't mind reloading Fedora, as I like to keep it up to date.

Linux is fairly fault-tolerant in my experience, but when clients would ask me to fix problems, I would always recommend just reloading the whole thing,
because it is difficult or impossible to find out what somebody else did, especially if they had root.
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11385
  • Country: us
    • Personal site
Re: Linux Dependency Black Hole
« Reply #20 on: April 25, 2024, 01:47:23 am »
I am asking questions about the increasing problem of dependencies.
You have not actually clarified what "problem of dependencies" you have in mind. I've been using Linux since 2002, and as my only OS since 2007, and in that time I have only seen things improve (discounting whatever Canonical is doing).

So, if you can provide specific examples, there may be something to discuss. Otherwise it is just way too abstract.
Alex
 
The following users thanked this post: bpiphany

Online tatel

  • Frequent Contributor
  • **
  • Posts: 479
  • Country: es
Re: Linux Dependency Black Hole
« Reply #21 on: April 25, 2024, 04:17:04 am »
If you don't want to get into dependency hell, just stick to apt and what is in the repositories for the release you installed. Package maintainers did that work for you. Making a package for Debian implies full knowledge of all the Debian policies, etc., and learning that means a lot of work.

I also think things are getting worse. The reason is all those pesky new/better/wonderful ways to install software... that are full of shit. They don't want to learn how to create Debian packages, or they think they will make some money going that way, or fame/glory/whatever, so they have to reinvent the wheel again and again. But while apt is almost the perfect wheel, those new wheels, well, not so much.

AppImage can work; I don't like it, however. Docker could be useful if you have servers and want/need to migrate not just the software but also the contents from one server to another. What I can't swallow is pip. Use pip to install any Python thing, and odds are things will get messed up sooner rather than later, because you now have Python things installed that are unknown to apt. Pip would like to be half as good as apt is. And you'll probably forget you installed that thing with pip. Then a rough ride will begin.

tar.gz packages? To me, that means source code to be built. If you do so, you'll need to learn about the dependencies first, and build those dependencies, and the dependencies of the dependencies, and so on... until you have built all that's needed. All of it should go into a separate partition. There used to be a /local directory just for that partition to be mounted there. That way you can have different versions of libraries, etc., needed to do the work, and those binaries can easily be preserved if you want to upgrade the rest of the system. I used to do that only for versions of software with new features/fixes that I needed, and I would go back to installing it with apt as soon as the corresponding Debian package for "stable" was released.

But, often, the main reason is just user ignorance and unwillingness to read any documentation. Debian has quite good guides, and anyone who read about the sources.list file would know that Metrologist, in that other thread, is getting messages about hits from two different repositories, one of them for bookworm, the other for jessie. So, no wonder he got into dependency hell. He messed things up: first, he chose to do an upgrade when the easiest way, not having anything functional on that system, would have been a clean install. Then he modified /etc/apt/sources.list without knowing what he was doing (because he couldn't be bothered to read the guide). Then he complains "Linux sucks". LOL.

Easiest solution: clean install with just the defaults. After that, use just apt to install any packages. If some software can't be installed with apt, it just means that software isn't mature enough, and it should be avoided like the plague until you know what you are doing.

Edit: if you are going to do a Debian install downloading the packages from the Internet, just use an Ethernet cable. Wifi won't work... because the firmware for the wifi hardware isn't free. You'll have to install that firmware with apt after the install is done. It's just the Debian way, and Canonical profiteered from that greatly. Again, anyone who read the guide would know it; he would even know that an image including non-free firmware, with kernel modules for wifi hardware, is also available. But very few people seem willing to read anything nowadays.
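Staying inside apt also gives you a safety net the ad-hoc methods lack: you can ask it what an operation would do before committing. A sketch ('gqrx-sdr' is just an example ham-radio package name, and the command is guarded in case apt is absent):

```shell
# Preview what apt would install or remove, without changing anything.
# --simulate (-s) needs no root privileges.
if command -v apt-get >/dev/null 2>&1; then
    apt-get install --simulate gqrx-sdr || true
else
    echo "apt-get not present on this system"
fi
```

The OpenOCD incident earlier in this thread is the kind of surprise a dry run like this would have flagged before the desktop was removed.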
« Last Edit: April 25, 2024, 04:25:14 am by tatel »
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #22 on: April 25, 2024, 12:36:26 pm »
Thanks, ataradov and tatel.
Here is what I propose to do for my problem, which is NXP MCUXpresso:

The USB installer from linuxshoponline for Debian 12.4 is due to arrive soon.
I will buy another SATA SSD today, then dd /dev/zero it using Fedora, then set it up in the rack as the only media.
Install Debian per the installer instructions on the USB.
Install pyenv, which I understand is only available from GitHub, as ~/.pyenv.
Add LD_LIBRARY_PATH pointing to ~/.pyenv...python3.8.

Then start the install on Debian 12.4 of the MCUXpressoide....deb.bin from NXP. I will try to find any NXP instructions for that.
If there are significant problems I will start a new thread on EEVblog with screenshots etc. requesting assistance.
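The pyenv steps in the plan above might look like this in the shell. The clone and build are shown as comments because they need network access and a compile, and the 3.8.18 patch version is an assumption (any 3.8.x that pyenv lists would do):

```shell
# One-time setup (commented out: needs network and takes a while):
#   git clone https://github.com/pyenv/pyenv.git ~/.pyenv
#   ~/.pyenv/bin/pyenv install 3.8.18

# Per-session environment so an installer can find the 3.8 interpreter:
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/versions/3.8.18/bin:$PATH"
export LD_LIBRARY_PATH="$PYENV_ROOT/versions/3.8.18/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
printf 'PYENV_ROOT=%s\n' "$PYENV_ROOT"
```

Running the MCUXpresso installer from the same shell session is what lets it inherit these variables; exporting them in one terminal does nothing for an installer launched elsewhere.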
 

Offline xvr

  • Frequent Contributor
  • **
  • Posts: 303
  • Country: ie
    • LinkedIn
Re: Linux Dependency Black Hole
« Reply #23 on: April 25, 2024, 01:28:37 pm »
> I will try to find any NXP instructions for that

This is not a problem of bad Linux compatibility design, but a problem of bad customer support at NXP :(

I remember an old story about some software from Siemens for programming their hardware (I no longer remember what kind of hardware it was).
A company bought a lot of the hardware, and when they started programming, a problem arose: no one in the company could install the software package provided by Siemens (it was Windows, by the way).
After several days of unsuccessful battles with installers, a field support engineer from Siemens was called in. He came with a laptop with the software installed, just to show how easy it is to do :)
They asked him to install the software on their computer. He started, and gave up after 5 hours of unsuccessful attempts.
Then the team lead (after consultation with Siemens headquarters) made a decision: he confiscated the field engineer's laptop with the software installed and said he could take it back once the software worked on their server.
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #24 on: April 25, 2024, 01:51:07 pm »
Hi xvr,
Yes, I intend to contact NXP if I have any problems with the Debian 12.4 installation, as that is its intended OS.
Customer? Well, I'm a retired ham who bought a $40 dev board; that will carry some weight!

I used to program Siemens PLCs a long time ago. I regarded those PLCs and their "IDE" as better and more advanced than the USA and Japan PLCs.

Any alternative IDE for the LPCX4367 if I have no luck with MCUXpresso?
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11385
  • Country: us
    • Personal site
Re: Linux Dependency Black Hole
« Reply #25 on: April 25, 2024, 02:00:05 pm »
MCUXpresso officially supports only Ubuntu 20.04.2 LTS / 22.04 LTS. They can't guarantee operation on Debian or any other OSes. It does not mean it will not work on other variants, but if you try, you are expected to figure it out on your own. There is no reason support should waste time trying to figure it out.
Alex
 

Offline shapirus

  • Super Contributor
  • ***
  • Posts: 1575
  • Country: ua
Re: Linux Dependency Black Hole
« Reply #26 on: April 25, 2024, 02:17:05 pm »
I also think things are getting worse. The reason is all those pesky new/better/wonderful ways to install software... that are full of shit. They don't want to learn how to create Debian packages, or they think they will make some money going that way, or fame/glory/whatever, so they have to reinvent the wheel again and again. But while apt is almost the perfect wheel, those new wheels, well, not so much.
Apt is almost perfect, yes, but only as long as you install the software available in the distribution's repositories and, ideally, don't mix the distro releases (that can be done too, but requires experience and understanding).

The core issue with Linux package management, almost universally, is dynamic linking. Once you want to install something that was linked against a library that is not (or is no longer) part of your distribution, you're screwed. That's where hell begins. It's good if there's a source .deb package: you can usually build a binary package from it using dpkg-buildpackage, and it will be linked against your available libs and will install just fine. It's worse when you have plain sources. It's terrible when you have just binaries linked against the libs in a 5-year-old distro.
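The dynamic-linking failure mode is easy to see for yourself: ldd prints every shared library a binary expects, and any line ending in "not found" is exactly the kind of dependency this thread is about (/bin/ls is used only as a handy example binary):

```shell
# List the shared libraries a dynamically linked binary resolves;
# missing ones would show up as "=> not found".
if command -v ldd >/dev/null 2>&1; then
    ldd /bin/ls
else
    echo "ldd not available on this system"
fi
```

Running this against an old proprietary binary on a new distro is usually the quickest way to see which library versions it was built against and which ones your system no longer ships.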

This is where packaging systems like AppImage can be helpful: they bundle the binaries together with their libraries, leaving no external dependencies. They will still work in 20 years just like they do today (provided it's the same CPU architecture), unless they depend on some specific kernel calls, which is not typical for general userspace apps. Good luck trying to run anything built for a Linux distro in 2005 today, unless it's a static binary.

I can see why software developers want to bundle their binaries as AppImage: spending time learning how to build a gazillion different packages for a gazillion different distros, not to mention another gazillion of their releases, would be insane. Why do that if you can build one universal package (actually several, one per target CPU architecture) that is guaranteed to work on any Linux distro?

Linux packaging is perfect as long as you stay strictly within a given distribution's ecosystem. Once you try to take a step outside (and you will, unless your interest is purely academic), it becomes a nightmare.
 
The following users thanked this post: bpiphany

Online tatel

  • Frequent Contributor
  • **
  • Posts: 479
  • Country: es
Re: Linux Dependency Black Hole
« Reply #27 on: April 25, 2024, 04:02:35 pm »
Apt is almost perfect, yes, but only as long as you install the software available in the distribution's repositories and, ideally, don't mix the distro releases (that can be done too, but requires experience and understanding).

(...)

Linux packaging is perfect as long as you stay strictly within a given distribution's ecosystem.

I agree. However, I have to say, once a package for Debian is available, one can expect it to be incorporated by the gazillion distros that follow Debian. I wouldn't like to have all my applications compiled statically, because of the wasted space. But, yeah, it's good for just a couple of things.

Quote
Once you try to take a step outside (and you will, unless your interest is purely academic), it becomes a nightmare.

I understand what you say, but I can't really agree here. I'm used to building what is not in the repositories. I find there are a couple of advantages to that, like optimizing for your processor and maintaining control over your system. But, as said, I get everything back into Debian's ecosystem ASAP; it's usually not worth the hassle. I can't really understand what you mean by "purely academic interest". I did my first Linux install on a brand new PowerBook G3 in the past century. Since then, I've known quite a few people who don't need anything outside Debian's ecosystem for their daily work.

Now, to remain on point with the OP: I see that pyenv is a tool to manage different Python versions, and MCUXpresso is a software development suite from a vendor. Neither of these is SDR software. I was under the impression that was the goal, to play with SDR software. SDR's not my thing, so I don't know. But I think there are quite a few official Debian SDR packages that could perhaps be useful?

https://blends.debian.org/hamradio/tasks/sdr

 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #28 on: April 25, 2024, 04:27:17 pm »
Hi tatel,
The install of MCUXpresso was failing on "can't find Python 3.8". So one of my questions was: why does it specifically need 3.8 when higher versions of Python are already installed in their correct paths?
So I had to install pyenv, which allows downloading any Python 3 version.
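For reference, this is roughly the pyenv sequence that does it. The version number 3.8.19 is just an example (see `pyenv install --list`), and the Fedora prerequisite package names are from memory, so verify locally:

```shell
# pyenv compiles Python from source, so install build prerequisites first
# (Fedora package names; verify locally):
sudo dnf install -y gcc make zlib-devel bzip2-devel openssl-devel \
    readline-devel sqlite-devel libffi-devel

# Install pyenv itself, then a 3.8 interpreter alongside the system Python:
curl https://pyenv.run | bash
pyenv install 3.8.19
pyenv global 3.8.19      # make 3.8 the per-user default
python --version         # should now report Python 3.8.x
```

This leaves /usr/bin/python3 untouched, so system tools that depend on the distro Python keep working.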

What happened today: NXP has posted a new version of the MCUX .deb.bin along with updated manuals (I had downloaded the old version about 3 days ago). I have just downloaded the new version.
 

Online tatel

  • Frequent Contributor
  • **
  • Posts: 479
  • Country: es
Re: Linux Dependency Black Hole
« Reply #29 on: April 25, 2024, 04:58:53 pm »
Well, I would say that if Python >3.8 could be used, then the "vendor" is to blame for bad packaging.

I guess you are looking to use some specific software for your own reasons.

If that's not the case, then I would look at the hamradio task. You can click the names in the previously given link to see a description. Perhaps there's enough stuff there to let you skip a couple of steps. E.g., I see there's a fork, jtdx, with a high enough version number, of that WSJT-X thing that is driving Metrologist nuts in that other thread. I think the stuff in that link is worth a look. It could all be installed by just ticking the "hamradio" task during the install.

In my experience, it's better to stick with pure FOSS than to try to use software coming from a for-profit source. Often it's not as easy and doesn't have all the features of proprietary software; FreeCAD comes to mind almost immediately. But in the long run I think it's often worth the hassle: it will not vanish when the vendor gets bought or goes out of business. Particularly if you aren't going to use it professionally. OTOH, GRASS GIS was the thing that got me into Linux: it could do much more than any proprietary GIS software we could afford then. YMMV.

 

Offline JohanH

  • Frequent Contributor
  • **
  • Posts: 638
  • Country: fi
Re: Linux Dependency Black Hole
« Reply #30 on: April 25, 2024, 05:04:23 pm »

The install of MCUXpresso was failing on "can't find Python3.8"  So one of my questions was "Why does it specifically need 3.8, when higher versions of

I've had success with some proprietary software by creating a symlink to the newer installed version in the place where the software is looking for the version that it supports. Your mileage may vary...
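The idea, demonstrated in a scratch directory so nothing system-wide changes. For the real fix you would put the symlink in /usr/local/bin instead, and accept that syntax/ABI differences in the newer interpreter may still bite:

```shell
# Make a "python3.8" name that actually points at the installed python3:
mkdir -p /tmp/shimdir
ln -sf "$(command -v python3)" /tmp/shimdir/python3.8

# Any tool that looks up "python3.8" on this PATH now finds the shim:
PATH=/tmp/shimdir:$PATH python3.8 --version
```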
 

Online tatel

  • Frequent Contributor
  • **
  • Posts: 479
  • Country: es
Re: Linux Dependency Black Hole
« Reply #31 on: April 25, 2024, 05:08:19 pm »

The install of MCUXpresso was failing on "can't find Python3.8"  So one of my questions was "Why does it specifically need 3.8, when higher versions of

I've had success with some proprietary software by creating a symlink to the newer installed version in the place where the software is looking for the version that it supports. Your mileage may vary...

Yep.

But, if the vendor isn't able to do that... well, that speaks volumes
 

Offline xvr

  • Frequent Contributor
  • **
  • Posts: 303
  • Country: ie
    • LinkedIn
Re: Linux Dependency Black Hole
« Reply #32 on: April 25, 2024, 05:16:18 pm »
> What happened today, there is a new version of MCUX .deb.bin in NXP along with updated manuals

Do you know the exact distribution and version of Linux on which MCUX is known to run?
If yes:
1. Install Docker on your machine
2. Run the known-to-work Linux version inside it
3. Install the MCUX .deb.bin inside it
Profit ...
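A sketch of those steps, assuming Ubuntu 20.04 turns out to be the known-good release (the package list and installer file name are my guesses, not NXP's documentation):

```shell
# Start a throwaway Ubuntu 20.04 container:
docker run -it --name mcux ubuntu:20.04 bash

# ...then inside the container (package names guessed from the Debian-style
# error messages; verify with apt search):
apt update && apt install -y python3.8 libncurses5 libusb-1.0-0 dfu-util
./mcuxpressoide-*.deb.bin    # hypothetical installer file name
```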
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11385
  • Country: us
    • Personal site
Re: Linux Dependency Black Hole
« Reply #33 on: April 25, 2024, 05:53:59 pm »
"Why does it specifically need 3.8, when higher versions of python are already installed in their correct paths."
Because their QA team tested it with this version of Python, and their support teams are ready to assist with this version of Python. They are not focusing on broad OS support; they are focusing on minimizing support tickets.

This is the version packaged with the specified supported OSes. This is how commercial software with official support works. Open-source software can afford to make some assumptions about compatibility, since there is no explicit expectation of support.
Alex
 
The following users thanked this post: Someone, Karel, SiliconWizard

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #34 on: April 25, 2024, 07:35:05 pm »
OK... On Fedora 37, the new version of the MCUX installer now runs through to completion (except that it erases the .bin after it finishes), leaving these problems:
there are a few dependencies not found.
I have been able to clear a few, and about 8 or so are still not found.
There are two reasons, I think:
   the Fedora name suffix does not match
   the Fedora path does not match
libncurses5, libncursesw5, dfu-util, libusb-1.0-0-dev
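Debian and Fedora name the same libraries differently. On Fedora the usual trick is to ask dnf which package provides the missing file instead of guessing from the Debian name:

```shell
# Ask dnf which Fedora package ships the file the installer wants:
dnf provides '*/libncurses.so.5'     # -> ncurses-compat-libs on recent Fedora
dnf provides '*/libusb-1.0.so*'

# Then install the matches (these names are my expectation; check the
# dnf provides output first):
sudo dnf install ncurses-compat-libs dfu-util libusb1-devel
```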

The problem with Python 3.8 was not that the lib file was "not found"; the installer wants Python 3.8 to be active.
After I realized that, I was able to use pyenv to set 3.8 as the system-active version.

It is not worthwhile continuing with the dependency problems.

I am expecting the Debian installer to arrive.
If Debian does not work I will revert to Ubuntu
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #35 on: April 25, 2024, 08:21:40 pm »
Hi tatel, thanks for the Debian link.
I have installed some of those from the Fedora 32 repo, and some others from git tars. I recall there were dependency issues with the tars, but they were eventually solved one by one.

The reason I need the NXP MCUX IDE is for my new HackRF1 (FOSH/FOSS) transceiver; it has an NXP LPC-series MCU.
I discovered that while the receiver is very stable, the frequency setting is an approximation. Probably because the developers are not hams, the settings are neither monotonic nor accurate.
I want to try to improve the math approximations in the firmware for my own use, hence the NXP IDE.
Regards
 
 

Online tatel

  • Frequent Contributor
  • **
  • Posts: 479
  • Country: es
Re: Linux Dependency Black Hole
« Reply #36 on: April 25, 2024, 09:55:51 pm »
I'm not a ham, either.

It seems there's at least one HackRF-related package official in Debian, with command-line tools. I'm sure you could do your development using pure FOSS tools, but you are probably used to something different? That would be understandable.

There could also be some tool to convert packages from one distro to another, but IIRC, "alien" works the other way: RedHat -> Debian.

Ataradov said the software you are looking to use is for Ubuntu 20-22, and outside of that you are on your own. I think he's right. So having Ubuntu installed (brrllrrll) would be the easier way. Short of that, it will probably be easier to get it working under Debian than under Fedora.

Good luck
« Last Edit: April 25, 2024, 09:58:24 pm by tatel »
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #37 on: April 26, 2024, 02:29:18 pm »
Hi tatel,

I wanted to get MCUX on Fedora because I thought (perhaps in ignorance; I should have asked)
that it would be the proper, and maybe only, way to access and update the firmware on the LPC4320 MCU in the HackRF1.
The firmware is updated like this:
 hackrf_spiflash -w hackrf_one_usb.bin

MCUXpresso has been a nuisance with dependencies, and it is going to be another nuisance working outside of Fedora,
as everything here, including all computers and the ham radio station, runs on Fedora.

I would like to ask: how would a pro developer do this? Do they just accept setting up a dedicated OS to work on a particular MCU?

I have started to examine the HackRF firmware math, in C, that sets up the frequencies for the Si5351 clock.
I'm not sure yet, but I can see two likely areas where approximations are made: one is integer division by prime numbers, and one is a frequency-homing function that iterates until the result is within a tolerance.
It is likely that approximations were made because the HackRF1 covers a very wide frequency range of 1 MHz ~ 6000 MHz.

So I intend to build a firmware version for myself with a limited frequency range, say 1 ~ 200 MHz, and better accuracy, to 1 Hz if possible.
The reason accuracy matters for hams on HF: I have been collecting data for ionospheric research (HamSci) requiring ~10 millihertz resolution, and the digital modes are now used with frequency lock to a GPSDO (and the new mode FST4W).

I have corresponded with GreatScott on GitHub, who confirmed the frequency-selection accuracy limits and that there is no feedback of the actual frequency calculated.
« Last Edit: April 26, 2024, 02:40:20 pm by mag_therm »
 

Offline macboy

  • Super Contributor
  • ***
  • Posts: 2270
  • Country: ca
Re: Linux Dependency Black Hole
« Reply #38 on: April 26, 2024, 03:48:59 pm »
I am not a Linux dependency expert, but I have learned a few things over the years. First: use a good distro. The various libraries included in a distro release are tested to work together. So if something needs XYZlib, then when you install XYZlib using the distro's tools (apt, etc.), it will not break the system. Unfortunately, this version of XYZlib is probably more than a year old - maybe a few years - and might not satisfy the requirements of the tool you are trying to install or build.

You can try a chroot, or one of the many variations of it. This allows you to install a complete alternate root filesystem, which still runs on the present kernel but is isolated from the system's main root filesystem. You could install the same system (but with different packages/libraries), a different release of the OS (Debian 10 on a system running Debian 12), a different distro (Arch Linux on a Debian system), or even a different architecture (Debian for armhf on a Debian x64 system, in combination with qemu). The point of doing this is that you can install a minimal OS, then install only the dependencies for the application in question, then try to resolve the dependency issues. Since the number of installed packages is much smaller, there will hopefully be fewer dependencies to resolve. This also avoids breaking the main system by installing some different version of something outside of the distro package system.
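For example, a throwaway Debian chroot can be set up with debootstrap. The release name, path, and the packages installed inside are illustrative:

```shell
# Get debootstrap (Debian's chroot bootstrapper):
sudo dnf install debootstrap        # or: sudo apt install debootstrap

# Build a minimal Debian 12 tree and enter it:
sudo debootstrap bookworm /srv/test-chroot http://deb.debian.org/debian
sudo chroot /srv/test-chroot /bin/bash

# Now inside the chroot, pull in only what the application needs:
apt update && apt install dfu-util libusb-1.0-0-dev
```

If the experiment goes wrong, deleting /srv/test-chroot removes every trace without touching the host.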

Alternately, you can try building the application with static linkage, so that it does not depend on installed shared libraries at all. Instead, all of the shared-library code is built into the executables, and the dependencies are installed only for compilation (e.g. XYZlib-dev). You may (or will) choose to do this inside a chroot as well, to isolate the installation of the many build tools, dev libraries, etc. from the main Linux system. After the app is built, installed, and tested, you can delete (or archive) the chroot to save space. The statically linked binaries will be bigger, sometimes much bigger, but as a bonus they usually run faster as well.
 

Online tatel

  • Frequent Contributor
  • **
  • Posts: 479
  • Country: es
Re: Linux Dependency Black Hole
« Reply #39 on: April 26, 2024, 03:55:54 pm »
Hi.

Please don't take what I'm going to say too seriously, because I don't have any ham radio / MCUXpresso experience. To me, espresso is a good way to make coffee.

You have hackrf_spiflash and other related command-line tools in the package "hackrf", officially supported on Debian, and I would bet my last euro they don't use that Xpresso thing to build those tools. I don't know if there is firmware available; you'd have to look for it, or look at how the firmware is loaded and write your own accordingly.

Most FOSS software is built with GCC and related tools with the usual litany: configure - make - (become root) - make install. That means there will already be Makefiles to build the hackrf utilities, the firmware loader, etc. You could probably modify and use those Makefiles. That way you can have a good peep at how things are done in the FOSS world when for-profit companies are not involved.

I would look at the source packages for hackrf and anything else you could be interested in. You'll probably find quite a good base to start from. If you are not used to the GCC way of doing things, perhaps you'll find it easier to use something else and avoid learning the FOSS way to build.

I can say that knowing the GCC way could very well be worth the hassle. I would say most Linux software is built that way, and you can always get a GCC ecosystem from the repositories, installed on your system without any dependency problems whatsoever. Look for the Debian guides; if you start from Debian source packages, you may find that most of the work to produce Debian packages is already done. Not that you need to create any Debian packages to use the binaries you just built with GCC.
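For hackrf specifically, the host tools in the upstream repo actually build with CMake rather than autotools, but the ritual is much the same; a hedged sketch:

```shell
# Fetch the upstream sources and build the host tools out-of-tree:
git clone https://github.com/greatscottgadgets/hackrf.git
cd hackrf/host
mkdir build && cd build
cmake ..
make
sudo make install        # or keep the binaries in place and skip this
```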

Best wishes
 

Online tatel

  • Frequent Contributor
  • **
  • Posts: 479
  • Country: es
Re: Linux Dependency Black Hole
« Reply #40 on: April 26, 2024, 04:18:26 pm »
Another thing: development tools, libraries, etc., take a big chunk of hard-disk space, so you may find your Pi doesn't have that much space available. You could perhaps add some hardware, or do the development on a regular PC. In that case you would be creating binaries for one architecture on a machine of another, different architecture. For that, you should look at "cross-compiling".

It basically means you first build a toolchain able to do what you want. That's usually how OpenWRT is built, and I'm pretty sure it will be the same for the Pi.
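In this thread's case the target is the HackRF's LPC4320, an ARM Cortex-M4, so the firmware is cross-compiled on the PC with an arm-none-eabi toolchain. The flags below are illustrative of a single-file compile, not a complete firmware build:

```shell
# Install a bare-metal ARM cross toolchain:
sudo apt install gcc-arm-none-eabi   # Fedora: sudo dnf install arm-none-eabi-gcc-cs

arm-none-eabi-gcc --version

# Compile one file for the Cortex-M4 core:
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -O2 -c blink.c -o blink.o
```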
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #41 on: April 26, 2024, 04:22:08 pm »
Hi tatel, that is interesting.
Yes, I have downloaded the firmware C files from git and have been examining them. But I don't have a C IDE or debugger (and I haven't used C for 36 years, so there's a lot to learn).
So I thought MCUX and a devel board would give me everything, including debug.

Also, I have been thinking today: with the ease of uploading the firmware, and as I only need < 10 frequencies, I could just build 10 firmware images, each with a fixed dataset for the Si5351.
A little UI and bash to replace the firmware on a frequency change, together with the latest GNU Radio (Python-based), which allows users to build their own DSP, could be an easy solution, at least initially.
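The bash side of that could be as small as this sketch. The image names and the band list are made up; hackrf_spiflash -w is the real flashing command from the hackrf package:

```shell
#!/bin/bash
# Hypothetical sketch: pick a prebuilt firmware image per band and flash it.

select_image() {
    case "$1" in
        40m)  echo "hackrf_7mhz.bin"  ;;
        20m)  echo "hackrf_14mhz.bin" ;;
        10m)  echo "hackrf_28mhz.bin" ;;
        *)    echo "unknown band: $1" >&2; return 1 ;;
    esac
}

flash_band() {
    img="$(select_image "$1")" || return 1
    hackrf_spiflash -w "$img"    # write the chosen image to the HackRF
}
```

Called as e.g. `flash_band 20m` from a tiny menu script.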
« Last Edit: April 26, 2024, 04:37:40 pm by mag_therm »
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #42 on: April 26, 2024, 04:28:52 pm »
Hi tatel, I don't yet have an R Pi here, I use industrial computers running on 13.8V DC , the main one is Aaeon 6651 https://www.aaeon.ai/en/product/detail/boxer-6641
No problem with capacity.
 

Online tatel

  • Frequent Contributor
  • **
  • Posts: 479
  • Country: es
Re: Linux Dependency Black Hole
« Reply #43 on: April 26, 2024, 06:58:01 pm »
Hi tatel, that is interesting.
Yes, I have downloaded the firmware C files from git and have been examining them. But I don't have a C IDE or debugger (and I haven't used C for 36 years, so there's a lot to learn).
So I thought MCUX and a devel board would give me everything, including debug.

Also, I have been thinking today: with the ease of uploading the firmware, and as I only need < 10 frequencies, I could just build 10 firmware images, each with a fixed dataset for the Si5351.
A little UI and bash to replace the firmware on a frequency change, together with the latest GNU Radio (Python-based), which allows users to build their own DSP, could be an easy solution, at least initially.


Yeah, even not being a ham, that looks good. Using GNU-whatever looks right to any Linux talib  >:D

Using an IDE to build just a firmware binary looks to me like using a battleship to go after a trawler. YMMV. You could have a look at this:
https://opensource.com/article/19/7/introduction-gnu-autotools

Probably even a one-liner calling gcc would be enough for that.

Debugger: https://sourceware.org/gdb/

If you are so inclined and absolutely need an IDE, you could probably use Eclipse anyway, but then there you go again, installing something not in the Debian repositories?
https://wiki.debian.org/Eclipse
https://www.eclipse.org/downloads/packages/release/helios/m7/eclipse-ide-linux-developers

Hi tatel, I don't yet have an R Pi here, I use industrial computers running on 13.8V DC , the main one is Aaeon 6651 https://www.aaeon.ai/en/product/detail/boxer-6641
No problem with capacity.

Oops, I got my wires crossed with Metrologist, I'm afraid.
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #44 on: April 26, 2024, 08:49:40 pm »
battleship to go after a trawler

It is 28 .c and .h files. I wanted to put in breakpoints to trace through the math, which meanders over at least 3 of the .c files.
And the headers are all mixed up and hard for me to follow, I think due to multiple versions over 10 years.
Can all that be done in Eclipse and gdb?

And Thanks for your time on this.
 

Online tatel

  • Frequent Contributor
  • **
  • Posts: 479
  • Country: es
Re: Linux Dependency Black Hole
« Reply #45 on: April 27, 2024, 11:47:30 am »
28 files?  There goes my guess about compiling just one file. I should have thought twice about it.

At this point you are going further than I have ever been. I was used to, say, manually patching realtime extensions into a kernel version that wasn't the intended one (when the realtime patches were still not merged into the mainline kernel). So automatic patching of the source would fail, but one could find the right places to patch manually just by looking at the code. There was usually a whole bunch of files to patch, but it was quite an easy task that called for very little programming knowledge.

I'm into system administration, not programming. However, you are about to code, debug, etc., by yourself. You'll need to learn more about programming than I currently know. So take what I'm going to write with a grain of salt. Others in this forum know programming much better than I do; perhaps they can help from this point forward.

I guess it could be done in Eclipse and gdb, yes. But to learn how the builds work, I would: a) learn about autotools, then b) look at the source code.

I think that would be the fastest way to learn. I have never, ever used an IDE for any build. Software was built this way long before anyone had heard of Eclipse. And Eclipse (I guess) will use the autotools infrastructure anyway.

Usually just going to the top directory of the source and using the configure / make / make install litany is enough. The Makefiles do the magic: they hold the calls to gcc, with the different options, etc., so you don't need to call gcc manually for each file.

Then I'm pretty sure that, by looking at the source code and Makefiles, you'll find many, many clues. You'll have to dwell there for some time, I'm afraid, but again, quite probably that will be the fastest way to get the needed skills. The knowledge so achieved will be useful no matter which distro you are using, now or in the future: Debian-based, Red Hat-based, Slackware, whatever.

After that, I think you could easily go with that knowledge to Eclipse and have a successful, quite fast transition to work with that IDE, should you still think you need it.

You are going to embark yourself into a quite interesting experience. I wish you good luck and lots of fun.
 

Offline xvr

  • Frequent Contributor
  • **
  • Posts: 303
  • Country: ie
    • LinkedIn
Re: Linux Dependency Black Hole
« Reply #46 on: April 27, 2024, 12:02:49 pm »
It is 28 .c and .h files. I wanted to put in breakpoints to trace through the math, which meanders over at least 3 of the .c files.
And the headers are all mixed up and hard for me to follow, I think due to multiple versions over 10 years.
Can all that be done in Eclipse and gdb?
Yes, absolutely.

Eclipse is an IDE that uses gdb for debugging, so you need to deal only with gdb.

First of all, you need to build the project. What build system does it use? Can you see any of these files at the top level?

Makefile
configure
CMakeLists.txt
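Once it builds, a host-side debug session is straightforward. The file, function, and variable names below are made up for illustration:

```shell
# Build with debug symbols and no optimization so gdb maps lines cleanly:
gcc -g -O0 -o tuning_test main.c tuning.c
gdb ./tuning_test
# inside gdb:
#   (gdb) break set_freq        # stop at the tuning function
#   (gdb) run -f 14074000
#   (gdb) next                  # step line by line
#   (gdb) print freq_hz         # inspect a variable
```

Eclipse drives exactly these gdb commands for you behind its GUI.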
 

Offline mag_thermTopic starter

  • Frequent Contributor
  • **
  • Posts: 746
  • Country: us
Re: Linux Dependency Black Hole
« Reply #47 on: April 27, 2024, 01:10:01 pm »
Thanks for the encouraging replies.

The software project I did was actually the largest single project of my whole engineering career.
It is a numerical simulation of an electrothermal process.
It took me more than 5 years to develop, runs to about 350 k lines of code, and uses multi-core parallel processing.
There were trips around the world to various steel-processing plants to measure and verify its accuracy.

At the start I intended it to be multi-platform and was going to use C.

Due to deployment and maintenance worldwide and an expected life of 20 years,
I looked at various "friendly" IDEs and eventually selected Xojo.
There were to be no dependencies, and the whole application was to live in the user's space.
I used a pro programmer initially to do the GUIs and the SQLite.
And the client's engineers assisted with the SCADA/PLC interfaces.

Fairly early in the project, having trouble, we decided to abandon multi-platform and just use Linux.
We were fortunate that the original code and graphics continued to function over many Xojo updates from 2008 to 2019.
But in 2019 Xojo made major changes to graphics, so we also had to do an update, which took a few months.

I suppose all that experience is how I came to dislike dependencies and to rely on an IDE!

And yes, I am partly doing this HackRF1 firmware job to try to keep up to date, as well as hopefully improving the FOSS firmware.
« Last Edit: April 27, 2024, 01:16:34 pm by mag_therm »
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8250
  • Country: fi
Re: Linux Dependency Black Hole
« Reply #48 on: April 27, 2024, 05:06:27 pm »
Because their QA team tested it with this version of Python. And their support teams are ready to assist with this version of Python.

Yes. Using Python in the first place is usually a mistake, although no one ever got fired for using Python, so I guess it's used because it's popular. The problem with Python, which fanboys fail to accept but which in reality hits nearly every project, is that each version of Python breaks compatibility both backwards and forwards (you might not always notice if the project is very simple and thus not using many of the features, but something breaks every freaking time). It's not just Python 2 vs. Python 3; it's really a moving target. You either target one exact Python version and require the user to have that exact version, or you maintain your code all day long to match the newest version and require the user to keep updating their Python to the newest release as well. Or you use something as horrible as Docker to distribute your software plus the correct Python version.

PSA to all who want to develop software:
 * Do not use bullcrap like Python or Docker
 * Avoid dynamic linking; link statically if possible, and therefore:
 * Avoid library dependencies like the plague
 * Only use libraries to do very complex things, like OpenGL for 3D graphics

The danger of "just using a library" to do something as simple as formatting and sending a certain packet was demonstrated by the xz backdoor disaster.
 

Offline shapirus

  • Super Contributor
  • ***
  • Posts: 1575
  • Country: ua
Re: Linux Dependency Black Hole
« Reply #49 on: April 27, 2024, 05:36:23 pm »
PSA to all who want to develop software:
 * Do not use bullcrap like Python or docker
What's wrong with docker? It actually provides a way to create an abstraction layer to decouple the software in question from the host OS, thus helping to solve dependency issues, not to mention runtime context isolation.

* Avoid dynamic linking, link statically if possible, and therefore:
 * Avoid library dependencies like plague;
How does the latter follow from the former? If you link statically, you don't care about dependencies.

By using libraries, you avoid reinventing the wheel and avoid making dangerous mistakes. That's what libraries exist for in the first place. Yes, there are potential security risks, like the xz case, but such cases are rare. By creating your own implementations of what would otherwise be provided by libraries, you waste resources and create potential vulnerabilities of your own.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8250
  • Country: fi
Re: Linux Dependency Black Hole
« Reply #50 on: April 27, 2024, 06:21:16 pm »
What's wrong with docker? It actually provides a way to create an abstraction layer to decouple the software in question from the host OS, thus helping to solve dependency issues

It is "solving" an unnecessary problem that is created by poor practices. I consider it a bandaid, or a hack, or lipstick on a pig. Even if you end up using it, I strongly suggest designing your software so that it does not depend on Docker. If Docker makes things slightly easier for some (the new Docker generation), then it's OK. If your software is a nightmare to use without Docker, that's a good indicator of failure.

* Avoid dynamic linking, link statically if possible, and therefore:
 * Avoid library dependencies like plague;
How does the latter follow from the former? If you link statically, you don't care about dependencies.

Even with static linking, unnecessary dependencies increase the size of your binary and make the development work more difficult for you and for anyone else who tries to build the project from source. So what I really meant was: avoid excessive library use, regardless of static vs. dynamic linking.

Quote
By using libraries, you avoid reinventing the wheel and avoid making dangerous mistakes.

In a perfect world, yes, and that is the correct use of libraries. It is very typical, however, to solve relatively simple problems with libraries that do not exactly solve the problem, and to end up doing the same amount of work anyway, in which case you get both your own mistakes plus any hidden within the library.

A simplified example, not too far from reality, is using a DSP library to perform a convolution. Meanwhile, a convolution is only a few lines of code: looping through the data, doing multiplications and additions. If you do it through a library, you are:
* Creating a dependency,
* Due to a poor API, probably creating copies of data,
* Due to a poor API, probably supplying pointers and data counts, increasing the risk of serious memory-access bugs,
* Possibly decreasing performance.

For truly complex problems which are a lot of work to "reinvent", and which can be abstracted to simple and intuitive APIs, this isn't a problem at all.

You seem to have great trust in the quality of libraries. I don't. Therefore I suggest using only those you actually need. Quality over quantity.
« Last Edit: April 27, 2024, 06:28:26 pm by Siwastaja »
 
The following users thanked this post: JPortici

