Author Topic: thanks GNU!  (Read 7009 times)


Offline Mr. Scram

  • Super Contributor
  • ***
  • Posts: 9810
  • Country: 00
  • Display aficionado
Re: thanks GNU!
« Reply #25 on: August 23, 2017, 08:07:17 pm »
That's something someone who hasn't contributed to an open source piece of software says. The reality is it's a nightmare. I've made hundreds of pull requests and email patches over the years, and the acceptance rate is around 5%. Most are ignored, deleted or closed silently, or result in a flame war because some little big man's ego has been damaged by the idea that someone wants to contribute. So I gave up and now maintain patched forks of some things we use.
I had the same experience contributing to Wikipedia. Naively, I thought it'd be cool to contribute to something I've found so very useful. So I rewrote an article to be less biased, clarified a few things and removed ambiguous claims that lacked sources. Within a day, the whole thing was fully reverted by the original author. I've tried rewriting my contribution a couple of times in various ways, but most of it was trashed within hours.

It seems some authors appropriate certain articles and will not accept any contribution or change to their work. This guy was also part of a group that considers itself the guardian of the subject, which may contribute to a sense of entitlement and a refusal to accept contributions from anyone else. That was the first and last time I ever tried contributing anything to Wikipedia, other than fixing a few typos.

Sadly, it seems open source has this problem too. Ideas and projects are split into a myriad of factions, each of which competes with the others while often doing the same thing. Most of them do not have enough momentum to produce well-rounded products, leading to a huge mass of unremarkable software and hardware or, where there is some momentum, a dazzling array of alternatives with little difference between them. Long-term support is also often shaky. Ego seems to be a large part of that. Rather than working in the service of another, people would rather fork and do something themselves, even though the end product suffers. Disagreements also often seem to be 'resolved' by forking rather than by working things out together. That's a bit like taking the ball and going home, except that the ball gets copied. With many competing opinions and no one to sign off on decisions, projects often end up a jumbled mess of ideas and strategies.

Don't get me wrong, I really appreciate the concept and philosophy of the open source community and am happily using various products that have resulted from it, but there are a few serious issues that cause the whole effort to be much less fruitful than it could be.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5231
  • Country: us
Re: thanks GNU!
« Reply #26 on: September 03, 2017, 12:57:13 am »
What?  There are software people and coders with poor people skills?  Who knew?

And you mean they gravitate to tasks which minimize their need to expose those shortcomings? How strange. :-DD
 

Offline aandrew

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: ca
Re: thanks GNU!
« Reply #27 on: September 04, 2017, 01:24:44 am »
Use a virtualized environment and do not update the tools.

Absolutely, 100% this. This is exactly how I do my dev environments. The disk files will work for as long as you can boot the VM, and they won't ever change unless you need them to.

I don't know why more people don't do this. It's the only sane approach.
 

Offline aandrew

  • Frequent Contributor
  • **
  • Posts: 277
  • Country: ca
Re: thanks GNU!
« Reply #28 on: September 04, 2017, 01:32:35 am »
Not all exploits result in compiled applications being exploitable, but that doesn't mean you can assume that just because you've isolated your build machine, the applications you've compiled on it are safe.

While technically true, this is *such* a remote case that you're far, far, far more likely to be bitten by your own poor software development before a compiler exploit makes you look bad. Worry about the most common causes of software vulnerability/exploitability before worrying that an old compiler version is making your software vulnerable.
 

Offline Howardlong

  • Super Contributor
  • ***
  • Posts: 5319
  • Country: gb
Re: thanks GNU!
« Reply #29 on: September 04, 2017, 03:47:57 pm »
Use a virtualized environment and do not update the tools.

Absolutely, 100% this. This is exactly how I do my dev environments. The disk files will work for as long as you can boot the VM, and they won't ever change unless you need them to.

I don't know why more people don't do this. It's the only sane approach.

Indeed, that has been my approach for some time now. Without it, you unnecessarily end up down long rabbit holes, blaming your continually changing environment. It brings stability to your build environment, and when you come to maintain something months or years later, you don't have to spend days of your life rebuilding a working toolchain because half the stuff you used has since been deprecated.

Much of the time there is no reason for it to be connected to the Internet, or to any network for that matter. And as you've no doubt backed up the entire VM, you have a crisp, known-working setup to go back to should anything go badly wrong.

I'd say your average internet-connected PC is at significantly greater risk than a sandboxed VM.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23018
  • Country: gb
Re: thanks GNU!
« Reply #30 on: September 04, 2017, 05:17:32 pm »
Just to add: build your build environments with Ansible and Vagrant. That means you have the recipe stored as well. If you need a specific package upgraded, or want to test something, reproducing the environment is trivial.

We cd into a directory, type vagrant up, and ten minutes later we have a fresh VM sitting there with all tools deployed.
 

Offline macboy

  • Super Contributor
  • ***
  • Posts: 2254
  • Country: ca
Re: thanks GNU!
« Reply #31 on: September 06, 2017, 03:29:33 pm »
Just to add: build your build environments with Ansible and Vagrant. That means you have the recipe stored as well. If you need a specific package upgraded, or want to test something, reproducing the environment is trivial.

We cd into a directory, type vagrant up, and ten minutes later we have a fresh VM sitting there with all tools deployed.
We do something similar, but using a simple chroot. It's not a full-blown VM, but you get the same fresh filesystem, with all the build tools set up exactly as intended and no leftover garbage from last time.
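
For anyone curious, a minimal sketch of that kind of chroot setup on a Debian-style host (the suite name, paths and package list here are just examples, not what macboy actually uses):

Code: [Select]
# Bootstrap a fresh root filesystem into an empty directory
sudo debootstrap stretch /srv/buildroot http://deb.debian.org/debian
# Install the build tools into it
sudo chroot /srv/buildroot apt-get update
sudo chroot /srv/buildroot apt-get install -y build-essential git
# Enter the fresh environment and build
sudo chroot /srv/buildroot /bin/bash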
 

Offline Mr. Scram

  • Super Contributor
  • ***
  • Posts: 9810
  • Country: 00
  • Display aficionado
Re: thanks GNU!
« Reply #32 on: September 06, 2017, 08:28:31 pm »
Just to add: build your build environments with Ansible and Vagrant. That means you have the recipe stored as well. If you need a specific package upgraded, or want to test something, reproducing the environment is trivial.

We cd into a directory, type vagrant up, and ten minutes later we have a fresh VM sitting there with all tools deployed.
Can you tell us a little more about Ansible and Vagrant? They have a very polished-looking website that manages to convey surprisingly little about what their products actually are and do.
 

Offline bd139

  • Super Contributor
  • ***
  • Posts: 23018
  • Country: gb
Re: thanks GNU!
« Reply #33 on: September 06, 2017, 08:40:49 pm »
Vagrant is a tool that allows you to build virtual machine images from a single definition file. You point it at a system image and it will go and fetch it, set up VirtualBox, set up all the networking and everything, deploy all your software, and off you go. Example:

Code: [Select]
vagrant init centos/7
vagrant up 
# wait a few minutes the first time as it downloads and caches the image
vagrant ssh

And you're in a VM! See https://www.vagrantup.com/intro/getting-started/ for details. You can store the Vagrantfile definition in source control and share it between team members afterwards if you need to. It will also set up synced folders and so on; a minimal Vagrantfile might look like the sketch below.
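
To illustrate (a minimal sketch, not from this thread; the box name, folder paths and playbook file name are placeholders):

Code: [Select]
# Vagrantfile -- Vagrant's Ruby DSL
Vagrant.configure("2") do |config|
  # Base box to fetch and cache on the first "vagrant up"
  config.vm.box = "centos/7"
  # Share the project sources with the VM
  config.vm.synced_folder "./src", "/home/vagrant/src"
  # Provision the toolchain with Ansible (hypothetical playbook name)
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "toolchain.yml"
  end
end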

Ansible happens as part of the above. Ansible is "desired state configuration", so you can say "I want these packages", "I want this file to be here", "I want this to happen", "I want these ports open on the firewall", and it will take whatever state the system is in, even if you've fudged it around, and make that happen. These definitions are stored in files as well, in source control. This means you can version your entire toolchain, environment, everything!
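
As a rough sketch of what a playbook looks like (the file name and package list are hypothetical examples, not from this thread):

Code: [Select]
# toolchain.yml -- a minimal Ansible playbook sketch
---
- hosts: all
  become: yes
  tasks:
    # Desired state: these packages must be present
    - name: Install the embedded toolchain
      apt:
        name: "{{ item }}"
        state: present
      with_items:
        - gcc-arm-none-eabi
        - build-essential
        - cmake
        - git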

Just git checkout the tag, vagrant up and your toolchain is online. That includes the toolchain you had 7 years ago if you want.

Your entire toolchain becomes a few kb of configuration in a source repository. No storing gigabytes of crap for years.

Ansible intro.

http://docs.ansible.com/ansible/latest/intro.html
http://docs.ansible.com/ansible/latest/playbooks.html

Ansible is pretty amazing. You can use it to talk to hundreds of machines at once and make them all dance the same dance or just use it to set up a single one.

Here's a list of what it can do automatically: http://docs.ansible.com/ansible/latest/modules_by_category.html

As an example, if you don't want to do local development, or you need a hefty build machine for a big source project, you can point it at Amazon EC2, fire up a massive instance using EXACTLY the same configuration you have in your local image, have it compile everything and deliver the compiler output, then shut the instance down and send you an email.
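
The EC2 part can be driven from a playbook too; a rough sketch using Ansible's ec2 module, where the key name, AMI id and instance type are all placeholders:

Code: [Select]
# Launch a big one-off build box (hypothetical values throughout)
- name: Fire up a build instance
  ec2:
    key_name: build-key
    instance_type: c4.8xlarge
    image: ami-0123456
    wait: yes
    count: 1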
« Last Edit: September 06, 2017, 08:43:31 pm by bd139 »
 
The following users thanked this post: Mr. Scram

Offline savril

  • Regular Contributor
  • *
  • Posts: 66
  • Country: fr
Re: thanks GNU!
« Reply #34 on: September 08, 2017, 11:18:29 am »
Another option for a "virtual" environment is Docker. It uses Linux cgroups (and namespaces) to provide an environment with a separate set of libraries while running the process on the same kernel as your main Linux system, but with a level of isolation. The virtual environment is called a container. Containers have the advantage of being much faster to build and launch, at the cost of less isolation from your main system.

It can also run on Mac and Windows; there it runs inside a VM but provides roughly the same functionality. You can, for example, run a shell command in the container while launching it from your main system's command shell.
For instance, docker exec mycontainerid echo "hello world" will run the echo inside the container while the output appears in your Windows shell. With docker exec -it mycontainerid bash you get an interactive shell running in the container while you type your commands in your Windows shell.
You can also make your local filesystem accessible inside the container.

You can build the images with a simple Dockerfile.
Code: [Select]
FROM ubuntu:zesty

ARG DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get -y install \
    gcc-arm-none-eabi \
    build-essential \
    cmake \
    git
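
Building and using the image is then something like this (the image tag and mount point are just example names):

Code: [Select]
# Build an image from the Dockerfile in the current directory
docker build -t armdev .
# Run a throwaway container with the sources mounted at /src
docker run --rm -it -v "$(pwd)":/src -w /src armdev bash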

Docker is tending to replace Vagrant in the IT world. For builds and continuous integration, the major advantage is the faster launch of the virtual environment: a container launches in less than a second, so you don't have to keep a VM running, and it consumes less memory too. In a VM you have to keep the whole system running, with its supporting utilities (cron, for example); in a container, you launch only the programs you need.
However, if you run Docker on Mac or Windows, keep in mind that you'll have a VM running a stripped-down version of Linux (1 GB of RAM seems a reasonable minimum for the VM).
 

Offline legacyTopic starter

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: thanks GNU!
« Reply #35 on: September 08, 2017, 11:42:38 am »
Vagrant is a tool that allows you to build virtual machine images using a single definition file

Oh, it looks Gentoo-ish, like Catalyst :D
But thanks, it's very useful, and simpler!
And thanks for your example!
 

Offline legacyTopic starter

  • Super Contributor
  • ***
  • !
  • Posts: 4415
  • Country: ch
Re: thanks GNU!
« Reply #36 on: September 08, 2017, 11:43:56 am »
Another option for a "virtual" environment is Docker. It uses Linux cgroups (and namespaces) to provide an environment with a separate set of libraries while running the process on the same kernel as your main Linux system, but with a level of isolation.

WOW. It's a brilliant idea :D
 

