
tio - A simple serial device I/O tool for linux

janoc:

--- Quote from: lundmar on October 01, 2017, 09:14:32 pm ---Sure, I know how Python works, but for tools like this there is simply no tolerance for binary installers - it must be compilable from source across various platforms, and this is where you hit all the well-known issues with the variation among Python installations. These issues become even worse in a cross-compilation scenario. Sure, things have become more stable with Python 3.x, but it is still a bit of a mess.

--- End quote ---

In Linux this only matters for distro packagers. If a regular user downloads a source package instead of a precompiled one from their distro's repository, any dependencies are just "pip install xxx" away - and then they should know how to build the software anyway.

And for Windows and Mac? You have to be kidding. If you don't provide prebuilt binaries, nobody will bother, because building software on Windows is such a pain, especially if there are other utilities fulfilling the same function available. You claim to want to be cross-platform compatible - and then you use the autoconf build system, which is notoriously difficult to make work on Windows and a pain in the butt on Mac (now, your code may not even work on Windows at all due to the API differences, but that's another subject). So your argument kinda flies out of the window there, unless by "cross-platform" you mean the Microsoft kind of cross-platform - "it is cross-platform as long as the platform is Windows" - only for Linux distributions instead.


--- Quote from: lundmar on October 01, 2017, 09:14:32 pm ---Of course CPU overhead is not an issue in a terminal tool but from a general perspective pure Python will always be much slower than what can be done in C/C++, regardless of the Python code being scripted or compiled. Sure, Python can make do for some real time performance applications if your computer is fast enough. However, if you care about performance and need to make maximum use of your available CPU cycles and memory resources then Python is simply not a good option.

--- End quote ---

That argument can be trivially debunked too. What you don't realize is that in most cases Python acts only as a sort-of "glue", putting together calls to native C/C++/Fortran/GPU/etc. libraries. Of course, if your benchmark is how fast I can run a loop printing numbers, C is likely going to be faster. But that is not a realistic benchmark at all because that's not how Python is commonly used.

For example, well-written NumPy numerical simulation code will run circles around a naive C/C++ implementation, because NumPy does things like vectorization for you and has tons of heavily optimized algorithms behind the scenes. And I am not even talking about libraries like Numba, which generate optimized code using the LLVM backend. You can see for yourself in this article:

https://www.ibm.com/developerworks/community/blogs/jfp/entry/A_Comparison_Of_C_Julia_Python_Numba_Cython_Scipy_and_BLAS_on_LU_Factorization?lang=en
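To make the "glue" point concrete, here is a minimal sketch (illustrative only - the exact speedup depends on the machine and array sizes): the pure-Python loop below runs one interpreted iteration per element, while np.dot dispatches to optimized native code, often a BLAS routine.

```python
import numpy as np

def dot_pure(a, b):
    # Naive pure-Python dot product: one interpreted loop iteration per element.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

a = np.arange(100_000, dtype=np.float64)
b = np.arange(100_000, dtype=np.float64)

# Same mathematical result; np.dot runs in optimized native code.
assert np.isclose(dot_pure(a, b), float(np.dot(a, b)))
```

Timing the two (e.g. with timeit) typically shows the native path orders of magnitude faster on large arrays - which is the whole point of NumPy-style vectorization.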


--- Quote from: lundmar on October 01, 2017, 09:14:32 pm ---And you are right, if you write proper error handling in Python then you won't see the nasty standard exception messages with long-winded call stacks. Unfortunately, many Python script kiddies don't do that.

--- End quote ---

True. However, we are talking about professional developers here, not script kiddies. A "kiddy" developer will be hard pressed to do proper error handling in any language.


--- Quote from: lundmar on October 01, 2017, 09:14:32 pm ---And then there is the whole discussion on whether one thinks Python is a beautiful script language or not - personally I can't stand how it uses indentation to delimit code blocks but that is very much a personal opinion.

--- End quote ---

That's pretty much a subjective issue. It used to bug me too, but one gets used to it quickly because the indentation falls exactly where you would indent code in C/C++ anyway, so you don't think about it after a while. Just don't mix spaces and tabs in one file - that's a pain in the butt.
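To illustrate why the mixing is such a pain: Python 3 outright rejects source whose indentation is ambiguous between tabs and spaces. A quick sketch using compile() on a source string:

```python
# Two lines at the same block level, one indented with a tab,
# the other with eight spaces - ambiguous, so Python 3 refuses it.
mixed_source = "if True:\n\tx = 1\n        y = 2\n"

try:
    compile(mixed_source, "<example>", "exec")
    raised = False
except TabError:
    raised = True

assert raised  # inconsistent use of tabs and spaces -> TabError
```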

lundmar:

--- Quote from: janoc on October 02, 2017, 09:00:27 am ---In Linux this only matters for distro packagers. If a regular user downloads a source package instead of a precompiled one from their distro's repository, any dependencies are just "pip install xxx" away - and then they should know how to build the software anyway.

--- End quote ---

Well, I understand you don't care much about the packager and end-user experience. I'm not going to subject my package maintainers to the pain of maintaining Python dependencies for my software. And end users having to pip install anything causes them unnecessary pain, not to mention the problem of which major Python version they need to pick. I simply will not go down that road.


--- Quote from: janoc on October 02, 2017, 09:00:27 am ---And for Windows and Mac? You have to be kidding. If you don't provide prebuilt binaries, nobody will bother, because building software on Windows is such a pain, especially if there are other utilities fulfilling the same function available. You claim to want to be cross-platform compatible - and then you use the autoconf build system, which is notoriously difficult to make work on Windows and a pain in the butt on Mac (now, your code may not even work on Windows at all due to the API differences, but that's another subject). So your argument kinda flies out of the window there, unless by "cross-platform" you mean the Microsoft kind of cross-platform - "it is cross-platform as long as the platform is Windows" - only for Linux distributions instead.

--- End quote ---

Yes, my focus is mostly GNU/Linux systems, and when I said cross-platform I meant, for example, Linux vs. BSD vs. Hurd vs. odd GNU distributions, etc. However, adding Windows to the mix does not make it easier to distribute applications with a dependency on Python. On Windows you don't have package managers the way we do on Linux, so users end up having to sort out dependencies themselves, make sure to install the correct major version of Python to make things work, and after that install any missing Python modules, whatever they might be. Normal Windows users can run into all sorts of trouble with these things, and some do. You can dismiss it if you like, but that is the reality.

I would never choose such a solution for a professional application, but that is my opinion.


--- Quote from: janoc on October 02, 2017, 09:00:27 am ---That argument can be trivially debunked too. What you don't realize is that in most cases Python acts only as a sort-of "glue", putting together calls to native C/C++/Fortran/GPU/etc. libraries. Of course, if your benchmark is how fast I can run a loop printing numbers, C is likely going to be faster. But that is not a realistic benchmark at all because that's not how Python is commonly used.

For example, well-written NumPy numerical simulation code will run circles around a naive C/C++ implementation, because NumPy does things like vectorization for you and has tons of heavily optimized algorithms behind the scenes. And I am not even talking about libraries like Numba, which generate optimized code using the LLVM backend. You can see for yourself in this article:

https://www.ibm.com/developerworks/community/blogs/jfp/entry/A_Comparison_Of_C_Julia_Python_Numba_Cython_Scipy_and_BLAS_on_LU_Factorization?lang=en

--- End quote ---

Please notice that I said that _pure_ Python will always be much slower than C/C++. Yes, Python can make use of all sorts of precompiled support libraries/modules and become faster that way, but that, in my opinion, goes against the point of using a scripting language for writing an application. Then I would much rather write it all in C/C++ and not bother with any of the downsides of Python. Also, with Python binary modules you can run into the problem of them not being precompiled and available on various platforms, and then you will have to compile and install them yourself. This of course does not apply to JIT-compiled Python modules, but there you pay a performance cost at runtime instead.

One specific NumPy numerical calculation performance test is not very convincing, and the use case for NumPy is largely limited to scientific computing. There are numerous benchmarks available online that show that C/C++ is generally several times faster in most scenarios. Even with clever JIT technologies like Numba it is, in general, nowhere near as fast as C/C++. Sure, in some very specific benchmarks it can get close, but in general use, no. In the end, Python/Cython/Numba/NumPy etc. can't beat the fact that in C/C++ you have full control of cache lines and memory layout, and with these mechanisms you can achieve the best possible performance.

I'm the kind of developer that cares about performance in any context and I'm not going to compromise my performance by writing my applications in Python and then have to depend on various Python modules to minimize the performance gap. That is my opinion, and in professional context I feel even stronger about this.


--- Quote from: janoc on October 02, 2017, 09:00:27 am ---True. However, we are talking about professional developers here, not script kiddies. A "kiddy" developer will be hard pressed to do proper error handling in any language.

--- End quote ---

Yet many Python applications you find still crash with the default Python stack trace - it's not pretty.
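For the record, the fix is only a few lines. A hedged sketch (open_port and the device path are hypothetical names) showing the pattern: catch the expected failure and print one short message instead of letting the default traceback spill out.

```python
import sys

def open_port(path):
    """Hypothetical stand-in for opening a serial device."""
    raise FileNotFoundError(f"{path}: no such device")

def main():
    try:
        open_port("/dev/ttyUSB0")
    except FileNotFoundError as error:
        # One short line on stderr and a nonzero exit code - no stack trace.
        print(f"error: {error}", file=sys.stderr)
        return 1
    return 0

exit_code = main()
```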


--- Quote from: janoc on October 02, 2017, 09:00:27 am ---That's pretty much a subjective issue. It used to bug me too, but one gets used to it quickly because the indentation falls exactly where you would indent code in C/C++ anyway, so you don't think about it after a while. Just don't mix spaces and tabs in one file - that's a pain in the butt.

--- End quote ---

Yes, very subjective. However, I really care about syntax and I find the lack of braces a poor language design choice. Also, never use tabs, only spaces ;)

Anyway, I think we simply have to agree to disagree on this topic. This thread is supposed to be all about tio ;)

janoc:

--- Quote from: lundmar on October 02, 2017, 01:05:53 pm ---
--- Quote from: janoc on October 02, 2017, 09:00:27 am ---In Linux this only matters for distro packagers. If a regular user downloads a source package instead of a precompiled one from their distro's repository, any dependencies are just "pip install xxx" away - and then they should know how to build the software anyway.

--- End quote ---

Well, I understand you don't care much about the packager and end-user experience. I'm not going to subject my package maintainers to the pain of maintaining Python dependencies for my software. And end users having to pip install anything causes them unnecessary pain, not to mention the problem of which major Python version they need to pick. I simply will not go down that road.


--- End quote ---

I am not asking you to - rewriting your terminal emulator in Python just for the sake of it would be obviously silly.

I am not sure how you determined that I "don't care about the packager or end user experience". Python packages and software are routinely packaged for distros with no issues. If the package has a proper build using distutils, it is no problem whatsoever. E.g. my Mageia 6 has over 1000 Python packages listed in the repository, including major software like GNU Radio, the entire NumPy/SciPy stack, etc. Packaging Python applications is no different from packaging C/C++ - you have to handle dependencies there as well.
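For reference, a "proper build" is not much work. This is a minimal, purely illustrative sketch - the project name, package, and entry point are made up, and it uses setuptools (the maintained successor to distutils) rather than raw distutils:

```python
# Hypothetical minimal setup.py - all names here are placeholders.
from setuptools import setup, find_packages

setup(
    name="example-tool",       # placeholder project name
    version="0.1.0",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # installs an 'example-tool' command that calls example_tool.main:run
            "example-tool = example_tool.main:run",
        ],
    },
)
```

With a file like this, distro packagers and pip both get a standard build/install path, which is what makes downstream packaging routine.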

And end users are not supposed to build from source (even if it is not difficult at all) - I think I have been rather explicit about that.


--- Quote from: lundmar on October 02, 2017, 01:05:53 pm ---Yes, my focus is mostly GNU/Linux systems, and when I said cross-platform I meant, for example, Linux vs. BSD vs. Hurd vs. odd GNU distributions, etc. However, adding Windows to the mix does not make it easier to distribute applications with a dependency on Python. On Windows you don't have package managers the way we do on Linux, so users end up having to sort out dependencies themselves, make sure to install the correct major version of Python to make things work, and after that install any missing Python modules, whatever they might be. Normal Windows users can run into all sorts of trouble with these things, and some do. You can dismiss it if you like, but that is the reality.

I would never choose such a solution for a professional application, but that is my opinion.

--- End quote ---

Whether I like it or not is beside the point (btw, I have been a Linux user since 1994). If you deal with software for business, you will have to deal with Windows too; that's just a fact of life. I have had to deal with various Unix systems over time, but have yet to see anyone using Hurd. So it is cool that you are thinking about portability to it (seriously, your Unix/Linux TTY APIs are going to work on Hurd? Ehm ...).

Just this week I actually had to prepare some software for a Mac. Some clients even use that.

Re package management - pip & Anaconda work just fine on Windows. Anyhow, you are missing the point again - you don't distribute your application as just source code for Windows; you build a binary so that the user doesn't need to build it themselves. Yes, that is possible with Python. Then you have a normal, self-contained exe file.

I am not dismissing anything; my point is that you are making a portability argument - and using an ancient build system that pretty much makes cross-platform portability impossible (porting from one Linux to another Linux is not really cross-platform in my book). If you were using something like CMake it would be more believable.


--- Quote from: janoc on October 02, 2017, 09:00:27 am ---Please notice that I said that _pure_ Python will always be much slower than C/C++. Yes, Python can make use of all sorts of precompiled support libraries/modules and become faster that way, but that, in my opinion, goes against the point of using a scripting language for writing an application.

--- End quote ---

What? That's a bit like saying that using the standard C library goes against the point of using the C language for writing an application ... I didn't know that writing everything from scratch is the only acceptable form.


--- Quote from: lundmar on October 02, 2017, 01:05:53 pm ---One specific NumPy numerical calculation performance test is not very convincing, and the use case for NumPy is largely limited to scientific computing.

--- End quote ---

Right. So go tell that to folks like Google or Facebook that are building most of the deep-learning stuff using this. Or to financial analysts building investment plans using tools like Numpy & Pandas, etc. Or people doing any sort of data mining (practically everyone today - Numpy is free and much faster than Matlab which used to be the tool of choice).


--- Quote from: lundmar on October 02, 2017, 01:05:53 pm ---There are numerous benchmarks available online that show that C/C++ is generally several times faster in most scenarios. Even with clever JIT technologies like Numba it is, in general, nowhere near as fast as C/C++. Sure, in some very specific benchmarks it can get close, but in general use, no. In the end, Python/Cython/Numba/NumPy etc. can't beat the fact that in C/C++ you have full control of cache lines and memory layout, and with these mechanisms you can achieve the best possible performance.

I'm the kind of developer that cares about performance in any context and I'm not going to compromise my performance by writing my applications in Python and then have to depend on various Python modules to minimize the performance gap. That is my opinion, and in professional context I feel even stronger about this.

--- End quote ---

The problem is that what you are saying is relevant only if you are writing a close-to-the-metal application (btw, I do wonder how you get "full control of cache lines" - you can at best ask for contiguous memory allocation and a certain alignment). None of this is at all relevant for an application which spends 99.9% of its time waiting for a character to appear on a file descriptor - such as a terminal emulator. Or pretty much any application that has a UI.

What matters a lot, though, is developer productivity, because it is directly related to how costly (or not) the project is for the company. The entire time-is-money thing. Once your job becomes writing complex algorithms that involve a lot of math, networking, or anything else not covered by the standard C/C++ libraries, you will start to appreciate things like a good set of libraries and an expressive programming language (be it Python, Julia, Haskell, C#, or whatever else).

Why do you think Java and C# became so popular? They have large runtimes, Java is terribly verbose, and both are horrible for anything system-level. However, both have enormous libraries of code available that make handling common tasks a breeze. Try to do any sensible (aka complex) networking in C/C++ without e.g. Boost or something like 0MQ or ACE, and then do the same in C# using only the standard libraries, and you will see what I am talking about.

I have nothing against C or C++ and I still write tons of code in them but treating them as a sort of holy grail that nothing ever comes close to is totally counterproductive. I could spend a week or two writing an embedded web server in C++ for an application. Or I can write 5 lines of Python and be done in 30 minutes and move on to solving actual problems that the client actually pays me money for.

If I have learned anything over my career, it is that the key skill is not becoming an expert in one programming language. A much more important skill is using the right tool for the job and keeping your eyes open to new things, instead of being stuck on incorrect assumptions because you saw something in the past and didn't like it.


--- Quote from: lundmar on October 02, 2017, 01:05:53 pm ---Yet many Python applications you find still crash with the default Python stack trace - it's not pretty.

--- End quote ---

Yes, and a lot of C/C++ applications crash with a segmentation fault or a bus error. That is even less pretty, because nobody has any clue why. But perhaps it is more acceptable because users are used to applications just closing on them? That's really a silly point to make, IMO.

lundmar:

--- Quote from: janoc on October 02, 2017, 08:12:57 pm ---I am not asking you to - rewriting your terminal emulator in Python just for the sake of it would be obviously silly.

--- End quote ---

Of course not - that would indeed be very silly.


--- Quote from: janoc on October 02, 2017, 08:12:57 pm ---I am not sure how you determined that I "don't care about the packager or end user experience". Python packages and software are routinely packaged for distros with no issues. If the package has a proper build using distutils, it is no problem whatsoever. E.g. my Mageia 6 has over 1000 Python packages listed in the repository, including major software like GNU Radio, the entire NumPy/SciPy stack, etc. Packaging Python applications is no different from packaging C/C++ - you have to handle dependencies there as well.

And end users are not supposed to build from source (even if it is not difficult at all) - I think I have been rather explicit about that.

--- End quote ---

I'm not saying end users should build from source. I meant that for package maintainers it is important, and they will have to deal with missing or out-of-date dependencies. From a package maintainer's point of view, having lived through the painful years of the migration from Python 2.x to 3.x, there have been many distribution and stability issues. But things have clearly stabilized today with Python 3.x, and that is good - I'm just still not a fan.

Of course there are no issues for end users if the application is distributed well. On Linux there are rarely issues if users make use of their system's native package manager, as it will take care of all dependencies, and these dependencies will be shared among applications, so you can keep your package as small as possible. However, on Windows you will have to bundle all your dependencies, or rely on e.g. setuptools to download your Python dependencies, to make the end-user experience a good one - and even then you have to decide whether to bundle a full Python installation or rely on the end user installing the correct up-to-date version of Python/Anaconda (2.x vs 3.x). That's fine, it's just not the stuff I prefer to deal with when distributing applications.


--- Quote from: janoc on October 02, 2017, 08:12:57 pm ---Whether I like it or not is beside the point (btw, I am a Linux user since 1994). If you deal with software for business, you will have to deal with Windows too, that's just a fact of life. I had to deal with various Unix systems over time, but yet have to see anyone using a Hurd. So it is cool you are thinking about portability to it (seriously, your Unix/Linux TTY apis are going to work on Hurd? Ehm ...)

--- End quote ---

Great, and I've been a Linux hacker since 1995 and have been working with embedded Linux professionally for many years, but that is irrelevant for this discussion. The point is that cross-platform can be more than just Windows vs. Linux vs. Mac. And yes, eventually I plan to make tio also work on GNU Hurd. Hurd is an interesting OS and good work is being put into it by the GNU people and others. Its microkernel design is superior to Linux in important ways, but that is an entirely different discussion we don't need to engage in here.


--- Quote from: janoc on October 02, 2017, 08:12:57 pm ---Re package management - pip & Anaconda work just fine on Windows. Anyhow, you are missing the point again - you don't distribute your application as just source code for Windows; you build a binary so that the user doesn't need to build it themselves. Yes, that is possible with Python. Then you have a normal, self-contained exe file.

--- End quote ---

You keep misrepresenting what I'm saying. Of course you don't distribute source code to end users, especially not on Windows. And yes, for Windows, I certainly prefer to distribute applications as a self-contained exe with no external dependencies. Of course I know you can do that for Python applications too, but as I mentioned above, you will have to decide whether the end user has to install Python 2.x or 3.x separately, whether you bundle a potentially duplicate Python installation (which increases the installation size), or whether you compile all your Python into an executable so that it does not depend on Python at runtime. I'm simply not a fan of any of these mechanisms.


--- Quote from: janoc on October 02, 2017, 08:12:57 pm ---What? That's a bit like saying that using the standard C library goes against the point of using the C language for writing an application ... I didn't know that writing everything from scratch is the only acceptable form.

--- End quote ---

No, you are missing my point. What I'm saying is that if I'm going to write a professional self-contained application with performance in mind, then my preferred choice is to write it in e.g. C/C++ from the get-go and reap the performance gains immediately across my entire application. And of course never write everything from scratch - there are so many high-quality, well-abstracted C/C++ libraries available out there that even doing complex stuff has become trivial. Sure, it takes a little longer to write a C/C++ application, but not as much as people think. This is my professional preference. Of course, if I'm putting something together quickly and performance is not the first priority, then yes, I use Python or whatever other higher-level language fits the job.


--- Quote from: janoc on October 02, 2017, 08:12:57 pm ---Right. So go tell that to folks like Google or Facebook that are building most of the deep-learning stuff using this. Or to financial analysts building investment plans using tools like Numpy & Pandas, etc. Or people doing any sort of data mining (practically everyone today - Numpy is free and much faster than Matlab which used to be the tool of choice).

--- End quote ---

And I think that is great - it's an improvement. It used to be that scientists preferred using good ol' antiquated Matlab in combination with optimized computational libraries, typically written in C/C++, to gain better performance and thereby avoid waiting weeks for their computations. Now, instead, they get to use Python, Lua, etc. in combination with optimized computational libraries, typically written in C/C++, to gain better performance. I mean, it is no coincidence that the core of NumPy is written in C.


--- Quote from: janoc on October 02, 2017, 08:12:57 pm ---The problem is that what you are saying is relevant only if you are writing a close-to-the-metal application (btw, I do wonder how you get "full control of cache lines" - you can at best ask for contiguous memory allocation and a certain alignment). None of this is at all relevant for an application which spends 99.9% of its time waiting for a character to appear on a file descriptor - such as a terminal emulator. Or pretty much any application that has a UI.

--- End quote ---

Not full control in the literal sense, of course, but as good as it gets. There are many tricks you can do with the C/C++ compiler, linker, and language constructs to make sure your most performance-critical code aligns and fits within the cache lines of your CPU, and to maximize the chance it will be picked up by the cache mechanisms. Even moving code blocks around in the memory layout to suit the cache prefetcher yields surprising gains. It's almost an art form. This is one of the reasons optimized libraries are often written in C/C++.


--- Quote from: janoc on October 02, 2017, 08:12:57 pm ---What matters a lot, though, is developer productivity, because it is directly related to how costly (or not) the project is for the company. The entire time-is-money thing. Once your job becomes writing complex algorithms that involve a lot of math, networking, or anything else not covered by the standard C/C++ libraries, you will start to appreciate things like a good set of libraries and an expressive programming language (be it Python, Julia, Haskell, C#, or whatever else).

Why do you think Java and C# became so popular? They have large runtimes, Java is terribly verbose, and both are horrible for anything system-level. However, both have enormous libraries of code available that make handling common tasks a breeze. Try to do any sensible (aka complex) networking in C/C++ without e.g. Boost or something like 0MQ or ACE, and then do the same in C# using only the standard libraries, and you will see what I am talking about.

--- End quote ---

I'm not saying C/C++ is the one and only way. I think you should pick the right tool for the right job, and I think we agree on that point. However, in the case of C++ there are so many good libraries and UI toolkits available that help a lot to speed up development, to the point where you can justify its use.

In C++ it does not really matter that much whether you use Boost or the standard libraries. Fun fact: stuff conceived in Boost often ends up in the standard libraries.


--- Quote from: janoc on October 02, 2017, 08:12:57 pm ---I have nothing against C or C++ and I still write tons of code in them but treating them as a sort of holy grail that nothing ever comes close to is totally counterproductive. I could spend a week or two writing an embedded web server in C++ for an application. Or I can write 5 lines of Python and be done in 30 minutes and move on to solving actual problems that the client actually pays me money for.

If I have learned anything over my career, it is that the key skill is not becoming an expert in one programming language. A much more important skill is using the right tool for the job and keeping your eyes open to new things, instead of being stuck on incorrect assumptions because you saw something in the past and didn't like it.

--- End quote ---

Again, we agree: the right tool for the right job. However, I will say that becoming an expert in a specific language does carry a lot of merit for many jobs. Still, I think it is more important to simply be an expert programmer who is familiar with various programming paradigms and techniques, because that makes it possible to quickly adapt to and use new languages fitting the job at hand.


--- Quote from: janoc on October 02, 2017, 08:12:57 pm ---Yes, and a lot of C/C++ applications crash with a segmentation fault or a bus error. That is even less pretty, because nobody has any clue why. But perhaps it is more acceptable because users are used to applications just closing on them? That's really a silly point to make, IMO.

--- End quote ---

It does require some level of discipline to introduce sensible error handling in either language. I just wish Python wasn't so noisy in its default verbosity.

HoracioDos:
Hi.
Until now I was used to cu and screen. Now I'm trying tio. For GUI: CuteCom, Moserial, and CoolTerm work fine. For terminal: Tilix.
