Author Topic: tio - A simple serial device I/O tool for linux  (Read 17877 times)


Offline lundmarTopic starter

  • Frequent Contributor
  • **
  • Posts: 436
  • Country: dk
Re: tio - A simple TTY terminal I/O application for linux
« Reply #25 on: October 01, 2017, 01:11:28 pm »
@janoc, btw. there is a new tio release which includes the fixed configure script ;)

https://github.com/tio/tio/releases
https://lxi-tools.github.io - Open source LXI tools
https://tio.github.io - A simple serial device I/O tool
 

Offline Karel

  • Super Contributor
  • ***
  • Posts: 2217
  • Country: 00
Re: tio - A simple TTY terminal I/O application for linux
« Reply #26 on: October 01, 2017, 01:46:19 pm »
Code:
guv@cetriolo:/data/karel/software/tio> git clone https://github.com/tio/tio.git
Cloning into 'tio'...
remote: Counting objects: 1143, done.
remote: Compressing objects: 100% (73/73), done.
remote: Total 1143 (delta 50), reused 62 (delta 22), pack-reused 1047
Receiving objects: 100% (1143/1143), 200.50 KiB | 0 bytes/s, done.
Resolving deltas: 100% (686/686), done.
guv@cetriolo:/data/karel/software/tio> cd tio/
guv@cetriolo:/data/karel/software/tio/tio> ./configure
bash: ./configure: No such file or directory
guv@cetriolo:/data/karel/software/tio/tio> autoconf
configure.ac:4: error: possibly undefined macro: AM_INIT_AUTOMAKE
      If this token and others are legitimate, please use m4_pattern_allow.
      See the Autoconf documentation.
configure.ac:5: error: possibly undefined macro: AM_SILENT_RULES
configure.ac:25: error: possibly undefined macro: AM_CONDITIONAL
guv@cetriolo:/data/karel/software/tio/tio> make
make: *** No targets specified and no makefile found.  Stop.
guv@cetriolo:/data/karel/software/tio/tio> uname -a
Linux cetriolo.suse 4.4.87-18.29-default #1 SMP Wed Sep 13 07:07:43 UTC 2017 (3e35b20) x86_64 x86_64 x86_64 GNU/Linux
guv@cetriolo:/data/karel/software/tio/tio> cat /etc/os-release
NAME="openSUSE Leap"
VERSION="42.2"
ID=opensuse
ID_LIKE="suse"
VERSION_ID="42.2"
PRETTY_NAME="openSUSE Leap 42.2"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:leap:42.2"
BUG_REPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/"
guv@cetriolo:/data/karel/software/tio/tio>
« Last Edit: October 01, 2017, 01:47:55 pm by Karel »
 

Offline lundmarTopic starter

  • Frequent Contributor
  • **
  • Posts: 436
  • Country: dk
Re: tio - A simple TTY terminal I/O application for linux
« Reply #27 on: October 01, 2017, 02:00:49 pm »
Code:
guv@cetriolo:/data/karel/software/tio> git clone https://github.com/tio/tio.git
Cloning into 'tio'...
remote: Counting objects: 1143, done.
remote: Compressing objects: 100% (73/73), done.
remote: Total 1143 (delta 50), reused 62 (delta 22), pack-reused 1047
Receiving objects: 100% (1143/1143), 200.50 KiB | 0 bytes/s, done.
Resolving deltas: 100% (686/686), done.
guv@cetriolo:/data/karel/software/tio> cd tio/
guv@cetriolo:/data/karel/software/tio/tio> ./configure
bash: ./configure: No such file or directory

May I suggest you download and use the official release tarball: https://github.com/tio/tio/releases/download/v1.24/tio-1.24.tar.xz

If you do insist on using the git source tree, you will have to run the more or less conventional ./autogen.sh script first, which will generate the configure script etc.
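For reference, the two build paths look roughly like this (a sketch only; it assumes autoconf, automake and a C toolchain are installed, and uses the v1.24 tarball linked above):

```shell
# Option 1: build from the release tarball (ships a pre-generated configure)
tar xf tio-1.24.tar.xz
cd tio-1.24
./configure
make

# Option 2: build from the git tree (configure must be generated first)
git clone https://github.com/tio/tio.git
cd tio
./autogen.sh   # generates configure via autoconf/automake
./configure
make
```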
https://lxi-tools.github.io - Open source LXI tools
https://tio.github.io - A simple serial device I/O tool
 

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3785
  • Country: de
Re: tio - A simple TTY terminal I/O application for linux
« Reply #28 on: October 01, 2017, 02:27:17 pm »
@janoc, btw. there is a new tio release which includes the fixed configure script ;)

https://github.com/tio/tio/releases

Works a treat now, thanks!  :-+
 

Offline Karel

  • Super Contributor
  • ***
  • Posts: 2217
  • Country: 00
Re: tio - A simple TTY terminal I/O application for linux
« Reply #29 on: October 01, 2017, 02:28:59 pm »
May I suggest you download and use the official release tarball: https://github.com/tio/tio/releases/download/v1.24/tio-1.24.tar.xz

If you do insist on using the git source tree, you will have to run the more or less conventional ./autogen.sh script first, which will generate the configure script etc.

I wasn't aware of that. Using the tarball, it compiles and runs fine on openSUSE. Thanks!  :-+
 

Offline mib

  • Contributor
  • Posts: 14
Re: tio - A simple TTY terminal I/O application for linux
« Reply #30 on: October 01, 2017, 03:37:28 pm »
A few suggestions:

If you get a failure opening the tty, do some simple checks and give a more useful error message. Check for idiots logged in as root, and give a suitable error - root isn't allowed to open serial ports by default for obvious (although now largely historical) reasons.
Also check whether the user is a member of the correct group to access the tty and provide a helpful message if not. You should really read the group of the tty device file itself, etc. Those are the two serial port access errors I get asked about most frequently.

I think at best I can add a warning when you run as root because I really don't want to dictate policy. If you want to run as root I won't prevent you. We also have to keep in mind that tio is actually being used on embedded devices with very simple filesystems where root is still the normal user state. Regarding tty groups, I don't want to walk down the road of maintaining checks for specific groups since those can vary across distributions and over time.

I'm really only asking for better error messages on failure to open the serial port. When you get EACCES, check whether the uid is root and report that. Checking the group is a little harder: you'd have to look at the group of the tty device you're trying to open. But it's not that hard. Certainly don't use a hard-coded group name.

Something like:
"Failed to open tty: did you mean to run this as root?"
or
"Failed to open tty: you need to be in the 'dialout' group"

Rather than just the unhelpful "Permission denied" error, which people usually respond to by running the same command again as root, only to get the same error (for a different reason). Then I have to explain yet again that yes, it is meant to work like that, it's not a bug, I know, it's not like Windows, etc.
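A minimal sketch of what such a check could look like in C (illustrative only - `format_open_error` and its exact messages are my invention, not tio's actual code; it looks up the device's owning group instead of hard-coding a name):

```c
#include <errno.h>
#include <grp.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/* Compose a friendlier message for a failed open() of a tty device.
 * 'err' is the errno value from open(). Hypothetical sketch, not tio code. */
void format_open_error(const char *device, int err, char *buf, size_t buflen)
{
    if (err == EACCES)
    {
        struct stat st;

        if (getuid() == 0)
        {
            /* Even root got EACCES - likely a security module at work */
            snprintf(buf, buflen,
                     "Failed to open %s: permission denied even for root",
                     device);
            return;
        }

        /* Read the group owning the device instead of hard-coding a name
         * like 'dialout' (the group varies across distributions) */
        if (stat(device, &st) == 0)
        {
            struct group *grp = getgrgid(st.st_gid);
            snprintf(buf, buflen,
                     "Failed to open %s: you may need to be in the '%s' group",
                     device, grp ? grp->gr_name : "?");
            return;
        }
    }

    /* Anything else: fall back to the plain errno text */
    snprintf(buf, buflen, "Failed to open %s: %s", device, strerror(err));
}
```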

Make ctrl-t ctrl-t send a ctrl-t. That's a little easier to type reliably than taking a finger off ctrl.

I know what you are saying, but in this case I really prefer to keep the same type of key code sequence for all key commands to keep it consistent, simple and clean. Besides, the need for sending the raw ctrl-t key code is such a rare event that the vast majority of user fingers out there will never be affected by this non-standard Vulcan death grip ;)

Actually it's something I use quite a bit. I often have the situation where I'm connecting to a device through a serial console, then using it to connect to another serial console - basically a management interface that allows access to a lot of other serial ports. Then I want to send a break to one of those ports. That's ctrl-t t b, and I often find I don't release the ctrl key quickly enough. I'd add it as an extra, not in place of just using 't'.

And the 'weird shit' with the status variable in the function? Other than that, nice clean code.

Thanks.

Actually the 'status' thing is not weird :) Think about it. A function returns a value that represents its status code. Hence I prefer 'status' over any of the less meaningful shorthand abbreviations 'r', 'ret', 'retval' etc. Anyway, at the end of the day it's just a matter of coding style - generally I prefer not keeping everything super abbreviated just to save a few characters and in the process muddle the expressed semantics :)

Calling a status variable 'status' isn't weird. Declaring it with attribute(unused), then assigning a value to it then never checking that value is weird. It just stood out as odd compared to the rest of the code. I see you've removed most of that now but the unused variable is still there.
 

Offline donmr

  • Regular Contributor
  • *
  • Posts: 155
  • Country: us
  • W7DMR
Re: tio - A simple TTY terminal I/O application for linux
« Reply #31 on: October 01, 2017, 04:41:40 pm »
My standard tool for this is miniterm.py which comes as part of the Python Serial package on Debian based distros.
 

Offline lundmarTopic starter

  • Frequent Contributor
  • **
  • Posts: 436
  • Country: dk
Re: tio - A simple TTY terminal I/O application for linux
« Reply #32 on: October 01, 2017, 05:27:48 pm »
I'm really only asking for better error messages on failure to open the serial port. When you get EACCES, check whether the uid is root and report that. Checking the group is a little harder: you'd have to look at the group of the tty device you're trying to open. But it's not that hard. Certainly don't use a hard-coded group name.

Something like:
"Failed to open tty: did you mean to run this as root?"
or
"Failed to open tty: you need to be in the 'dialout' group"

Rather than just the unhelpful "Permission denied" error, which people usually respond to by running the same command again as root, only to get the same error (for a different reason). Then I have to explain yet again that yes, it is meant to work like that, it's not a bug, I know, it's not like Windows, etc.

Well, I guess we could raise the bar from the standard error messages we are used to. Nothing wrong with helping the user along. I'll add it to my TODO list.

Calling a status variable 'status' isn't weird. Declaring it with attribute(unused), then assigning a value to it then never checking that value is weird. It just stood out as odd compared to the rest of the code. I see you've removed most of that now but the unused variable is still there.

Oh, I got ya. Yeah, it was a stale code line - I just removed it. It escaped me because it was marked unused to silence an older version of gcc, which complained unnecessarily about one code path that didn't use it.

Btw. tio is fully open source so code contributions via github are welcome ;)
https://lxi-tools.github.io - Open source LXI tools
https://tio.github.io - A simple serial device I/O tool
 

Offline lundmarTopic starter

  • Frequent Contributor
  • **
  • Posts: 436
  • Country: dk
Re: tio - A simple TTY terminal I/O application for linux
« Reply #33 on: October 01, 2017, 05:49:55 pm »
My standard tool for this is miniterm.py which comes as part of the Python Serial package on Debian based distros.

I'm sorry, but if you want your application to be a first-class citizen then you simply don't write a tool like this in Python. I mean, it is okay if you accept (I don't) the usual problems with Python version incompatibilities, missing package dependencies, higher CPU overhead, and the horribly long-winded error messages facing a user when even a simple exception occurs. Thanks but no thanks ;)
« Last Edit: October 01, 2017, 05:51:28 pm by lundmar »
https://lxi-tools.github.io - Open source LXI tools
https://tio.github.io - A simple serial device I/O tool
 
The following users thanked this post: enz

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3785
  • Country: de
Re: tio - A simple TTY terminal I/O application for linux
« Reply #34 on: October 01, 2017, 07:35:49 pm »
I'm sorry, but if you want your application to be a first-class citizen then you simply don't write a tool like this in Python. I mean, it is okay if you accept (I don't) the usual problems with Python version incompatibilities, missing package dependencies, higher CPU overhead, and the horribly long-winded error messages facing a user when even a simple exception occurs. Thanks but no thanks ;)

If you want your app to be a first-class citizen then you learn to use the tools properly and none of the above will be an issue.

I guess you have never heard about "freezing", building binaries and/or installers with Python if you don't want to or can't rely on the platform Python installation. CPU overhead in a terminal app? Ehm ... You can write heavy computer vision stuff in Python - with real time performance. Just need to know how. And long winded error messages? I guess someone has never heard about error handling ...

(writing commercial Python applications is part of my day job)
« Last Edit: October 01, 2017, 07:44:33 pm by janoc »
 
The following users thanked this post: NiHaoMike, nugglix

Offline Monkeh

  • Super Contributor
  • ***
  • Posts: 7992
  • Country: gb
Re: tio - A simple TTY terminal I/O application for linux
« Reply #35 on: October 01, 2017, 07:41:19 pm »
root isn't allowed to open serial ports by default

What dain bramage is this?
 

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3785
  • Country: de
Re: tio - A simple TTY terminal I/O application for linux
« Reply #36 on: October 01, 2017, 07:45:28 pm »
root isn't allowed to open serial ports by default

What dain bramage is this?

That's wrong, I think the author has been a bit confused. That would be really bad if the root user was unable to open a serial port. Root certainly is allowed to open any device they want (now whether it is a good idea is another story).

Running the application as root is a bad idea for another reason, though - any security hole or bug in it means a potential system compromise.

The proper way, at least on Linux, is to add the normal non-privileged user to the group allowed to use serial ports - e.g. on my machine it is the "dialout" group. For USB devices (the various UART-to-USB converters like the FT232 and such) it may be necessary to add a udev rule granting the permissions to the currently logged-in user. However, most popular desktop Linux distributions have this preconfigured and there is no need to hack anything like that for a regular user.
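On a typical desktop distribution that fix looks something like the following (a sketch; the group name and device path vary by distro, so 'dialout' and /dev/ttyUSB0 are just common examples):

```shell
# Find out which group owns the serial device
stat -c '%G' /dev/ttyUSB0          # e.g. prints: dialout

# Add the current user to that group (takes effect after re-login)
sudo usermod -aG dialout "$USER"

# Alternatively, a udev rule granting access, e.g. for an FT232
# (0403:6001 are FTDI's well-known default vendor/product IDs);
# save as /etc/udev/rules.d/99-usb-serial.rules:
#   SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", MODE="0660", GROUP="dialout"
```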

« Last Edit: October 01, 2017, 07:55:09 pm by janoc »
 

Offline Monkeh

  • Super Contributor
  • ***
  • Posts: 7992
  • Country: gb
Re: tio - A simple TTY terminal I/O application for linux
« Reply #37 on: October 01, 2017, 07:46:03 pm »
root isn't allowed to open serial ports by default

What dain bramage is this?

That's wrong, I think the author has been a tad confused.

Or he's using one very warped distro.
 

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3785
  • Country: de
Re: tio - A simple TTY terminal I/O application for linux
« Reply #38 on: October 01, 2017, 07:58:35 pm »
Or he's using one very warped distro.

That would have to be explicitly forbidden by some security mechanism like grsecurity or something - definitely not standard. Otherwise nobody could open the device - all daemons like udev that grant permissions to devices in Linux run as root ...

 

Offline lundmarTopic starter

  • Frequent Contributor
  • **
  • Posts: 436
  • Country: dk
Re: tio - A simple TTY terminal I/O application for linux
« Reply #39 on: October 01, 2017, 09:14:32 pm »
I'm sorry, but if you want your application to be a first-class citizen then you simply don't write a tool like this in Python. I mean, it is okay if you accept (I don't) the usual problems with Python version incompatibilities, missing package dependencies, higher CPU overhead, and the horribly long-winded error messages facing a user when even a simple exception occurs. Thanks but no thanks ;)

If you want your app to be a first-class citizen then you learn to use the tools properly and none of the above will be an issue.

I guess you have never heard about "freezing", building binaries and/or installers with Python if you don't want to or can't rely on the platform Python installation. CPU overhead in a terminal app? Ehm ... You can write heavy computer vision stuff in Python - with real time performance. Just need to know how. And long winded error messages? I guess someone has never heard about error handling ...

(writing commercial Python applications is part of my day job)

Sure, I know how Python works, but for tools like this there is simply no tolerance for binary installers - it must be compilable from source across various platforms, and this is where you hit all the well-known issues with the variation among Python installations. These issues become even worse in a cross-compilation scenario. Sure, things have become more stable with Python 3.x, but it is still a bit of a mess.

Of course CPU overhead is not an issue in a terminal tool, but from a general perspective pure Python will always be much slower than what can be done in C/C++, regardless of whether the Python code is interpreted or compiled. Sure, Python can manage some real-time performance applications if your computer is fast enough. However, if you care about performance and need to make maximum use of your available CPU cycles and memory resources, then Python is simply not a good option.

And you are right, if you write proper error handling in Python then you won't see the nasty standard exception messages with long winded call stacks. Unfortunately, many Python script kiddies don't do that.

And then there is the whole discussion on whether one thinks Python is a beautiful script language or not - personally I can't stand how it uses indentation to delimit code blocks but that is very much a personal opinion.
« Last Edit: October 01, 2017, 09:17:02 pm by lundmar »
https://lxi-tools.github.io - Open source LXI tools
https://tio.github.io - A simple serial device I/O tool
 
The following users thanked this post: Karel

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3785
  • Country: de
Re: tio - A simple TTY terminal I/O application for linux
« Reply #40 on: October 02, 2017, 09:00:27 am »
Sure, I know how Python works, but for tools like this there is simply no tolerance for binary installers - it must be compilable from source across various platforms, and this is where you hit all the well-known issues with the variation among Python installations. These issues become even worse in a cross-compilation scenario. Sure, things have become more stable with Python 3.x, but it is still a bit of a mess.

In Linux this only matters for distro packagers. If a regular user downloads a source package instead of precompiled one from their distro's repository, any dependencies are just "pip install xxx" away and then they should know how to build the software anyway.

And for Windows and Mac? You have to be kidding. If you don't provide prebuilt binaries, nobody will bother because building software on Windows is such a pain, especially if there are other utilities fulfilling the same function available. You claim to want to be cross-platform compatible - and then you use the autoconf build system which is notoriously difficult to make work in Windows and pain in the butt on Mac (now, your code may not even work in Windows at all due to the API differences, but that's another subject). So your argument kinda flies out of the window there unless by "cross-platform" you mean the Microsoft kind of cross-platform - "it is cross-platform as long as the platform is Windows", only for Linux distributions instead.

Of course CPU overhead is not an issue in a terminal tool, but from a general perspective pure Python will always be much slower than what can be done in C/C++, regardless of whether the Python code is interpreted or compiled. Sure, Python can manage some real-time performance applications if your computer is fast enough. However, if you care about performance and need to make maximum use of your available CPU cycles and memory resources, then Python is simply not a good option.

That argument can be trivially debunked too. What you don't realize is that in most cases Python acts only as a sort-of "glue", putting together calls to native C/C++/Fortran/GPU/etc. libraries. Of course, if your benchmark is how fast I can run a loop printing numbers, C is likely going to be faster. But that is not a realistic benchmark at all because that's not how Python is commonly used.

For example, well-written Numpy numerical simulation code will run circles around a naive C/C++ implementation, because Numpy does things like code vectorization for you and has tons of heavily optimized algorithms behind the scenes. And I am not even talking about libraries like Numba, which generate optimized code using the LLVM backend. You can see for yourself in this article:

https://www.ibm.com/developerworks/community/blogs/jfp/entry/A_Comparison_Of_C_Julia_Python_Numba_Cython_Scipy_and_BLAS_on_LU_Factorization?lang=en

And you are right, if you write proper error handling in Python then you won't see the nasty standard exception messages with long winded call stacks. Unfortunately, many Python script kiddies don't do that.

True. However, we are talking about professional developers, not script kiddies? A "kiddy" developer will be hard pressed to do proper error handling in any language.

And then there is the whole discussion on whether one thinks Python is a beautiful script language or not - personally I can't stand how it uses indentation to delimit code blocks but that is very much a personal opinion.

That's pretty much a subjective issue. It used to bug me too but one gets used to it quickly because the indentation falls exactly where you would indent code in C/C++ as well, so you don't think about it after a while. Just don't mix spaces and tabs in one file, that's a pain in the butt.
 

Offline lundmarTopic starter

  • Frequent Contributor
  • **
  • Posts: 436
  • Country: dk
Re: tio - A simple TTY terminal I/O application for linux
« Reply #41 on: October 02, 2017, 01:05:53 pm »
In Linux this only matters for distro packagers. If a regular user downloads a source package instead of precompiled one from their distro's repository, any dependencies are just "pip install xxx" away and then they should know how to build the software anyway.

Well, I understand you don't care much about the packager and end user experience. I'm not going to subject my package maintainers to the pains of maintaining any Python dependencies for my software. And end users having to pip install anything causes them unnecessary pain, not to mention the problem of which major Python version they need to pick. I simply will not go down that road.

And for Windows and Mac? You have to be kidding. If you don't provide prebuilt binaries, nobody will bother because building software on Windows is such a pain, especially if there are other utilities fulfilling the same function available. You claim to want to be cross-platform compatible - and then you use the autoconf build system which is notoriously difficult to make work in Windows and pain in the butt on Mac (now, your code may not even work in Windows at all due to the API differences, but that's another subject). So your argument kinda flies out of the window there unless by "cross-platform" you mean the Microsoft kind of cross-platform - "it is cross-platform as long as the platform is Windows", only for Linux distributions instead.

Yes, my focus is mostly GNU/Linux systems, and when I said cross-platform I meant, for example, Linux vs BSD vs Hurd vs odd GNU distributions etc. However, adding Windows to the mix does not make it easier to distribute applications with a dependency on Python. On Windows you don't have package managers the same way we have on Linux, so users end up having to sort out dependencies themselves: make sure to install the correct major version of Python, and after that install any missing Python modules, whatever they might be. Normal Windows users can run into all sorts of trouble with these things, and some do. You can dismiss it if you like, but that is reality for you.

I would never choose such a solution for a professional application, but that is my opinion.

That argument can be trivially debunked too. What you don't realize is that in most cases Python acts only as a sort-of "glue", putting together calls to native C/C++/Fortran/GPU/etc. libraries. Of course, if your benchmark is how fast I can run a loop printing numbers, C is likely going to be faster. But that is not a realistic benchmark at all because that's not how Python is commonly used.

For example, well-written Numpy numerical simulation code will run circles around a naive C/C++ implementation, because Numpy does things like code vectorization for you and has tons of heavily optimized algorithms behind the scenes. And I am not even talking about libraries like Numba, which generate optimized code using the LLVM backend. You can see for yourself in this article:

https://www.ibm.com/developerworks/community/blogs/jfp/entry/A_Comparison_Of_C_Julia_Python_Numba_Cython_Scipy_and_BLAS_on_LU_Factorization?lang=en

Please notice that I said that _pure_ Python will always be much slower than C/C++. Yes, Python can make use of all sorts of precompiled support libraries/modules and become faster that way, but that, in my opinion, goes against the point of using a scripting language for writing an application. Then I would much rather write it all in C/C++ and not bother with any of the downsides of Python. Also, with Python binary modules you can run into the problem of them not being precompiled and available on various platforms, and then you will have to compile and install them yourself. This of course does not apply to JIT-compiled Python modules, but there you pay a performance cost at runtime instead.

One specific Numpy numerical calculation performance test is not very convincing, and the use case for Numpy is largely limited to scientific computing. There are numerous benchmarks available online that show how C/C++ is generally a factor or more faster in most scenarios. Even with clever JIT technologies like Numba it is, in general, nowhere near as fast as C/C++. Sure, in some very specific benchmarks it can get close, but in general use, no. In the end Python/Cython/Numba/Numpy etc. can't beat the fact that in C/C++ you have full control of cache lines and memory layout, and with these mechanisms you can achieve the best possible performance.

I'm the kind of developer that cares about performance in any context and I'm not going to compromise my performance by writing my applications in Python and then have to depend on various Python modules to minimize the performance gap. That is my opinion, and in professional context I feel even stronger about this.

True. However, we are talking about professional developers, not script kiddies? A "kiddy" developer will be hard pressed to do proper error handling in any language.

Yet many Python applications you find still crash with the default Python stack trace - it's not pretty.

That's pretty much a subjective issue. It used to bug me too but one gets used to it quickly because the indentation falls exactly where you would indent code in C/C++ as well, so you don't think about it after a while. Just don't mix spaces and tabs in one file, that's a pain in the butt.

Yes, very subjective. However, I really care about syntax and I find the lack of braces a poor language design choice. Also, never use tabs, only spaces ;)

Anyway, I think we simply have to agree to disagree on this topic. This thread is supposed to be all about tio ;)
« Last Edit: October 02, 2017, 06:05:08 pm by lundmar »
https://lxi-tools.github.io - Open source LXI tools
https://tio.github.io - A simple serial device I/O tool
 

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3785
  • Country: de
Re: tio - A simple TTY terminal I/O application for linux
« Reply #42 on: October 02, 2017, 08:12:57 pm »
In Linux this only matters for distro packagers. If a regular user downloads a source package instead of precompiled one from their distro's repository, any dependencies are just "pip install xxx" away and then they should know how to build the software anyway.

Well, I understand you don't care much about the packager and end user experience. I'm not going to subject my package maintainers to the pains of maintaining any Python dependencies for my software. And end users having to pip install anything is causing them unnecessary pains not the mention the problems of which major python version they need to pick. I simply will not go down that road.


I am not asking you to - re-writing your terminal emulator in Python just for the sake of it would be obviously silly.

I am not sure how you determined that I "don't care about the packager or end user experience". Python packages and software are routinely packaged for distros with no issues. If the package has a proper build using distutils, it is no problem whatsoever. E.g. my Mageia 6 has over 1000 Python packages listed in the repository, including major software like GnuRadio and the entire Numpy/Scipy stack. Packaging Python applications is no different from packaging C/C++ - you have to handle dependencies there as well.

And end users are not supposed to build from source (even if it is not difficult at all) - I think I have been rather explicit about that.

Yes, my focus is mostly GNU/Linux systems, and when I said cross-platform I meant, for example, Linux vs BSD vs Hurd vs odd GNU distributions etc. However, adding Windows to the mix does not make it easier to distribute applications with a dependency on Python. On Windows you don't have package managers the same way we have on Linux, so users end up having to sort out dependencies themselves: make sure to install the correct major version of Python, and after that install any missing Python modules, whatever they might be. Normal Windows users can run into all sorts of trouble with these things, and some do. You can dismiss it if you like, but that is reality for you.

I would never choose such a solution for a professional application, but that is my opinion.

Whether I like it or not is beside the point (btw, I have been a Linux user since 1994). If you deal with software for business, you will have to deal with Windows too; that's just a fact of life. I have had to deal with various Unix systems over time, but have yet to see anyone using Hurd. So it is cool that you are thinking about portability to it (seriously, your Unix/Linux TTY APIs are going to work on Hurd? Ehm ...)

Just this week I had to actually prepare some software for a Mac, even. Some clients use even that.

Re package management - pip & anaconda work just fine in Windows. Anyhow, you are missing the point again - you don't distribute your application as just source code for Windows but build a binary so that the user doesn't need to build it themselves. Yes, that is possible with Python. Then you have a normal, self-contained exe file.

I am not dismissing anything, my point is that you are making a portability argument - and use an ancient build system that is pretty much making cross-platform (porting from one Linux to another Linux is not really cross-platform in my book) portability impossible. If you were using something like CMake it would be more believable.

Please notice that I said that _pure_ Python will always be much slower than C/C++. Yes, Python can make use of all sorts of precompiled support libraries/modules and become faster that way, but that, in my opinion, goes against the point of using a scripting language for writing an application.

What? That's a bit like saying using the standard C library goes against the point of using the C language for writing an application ... I didn't know that writing everything from scratch is the only acceptable form.

One specific Numpy numerical calculation performance test is not very convincing and the use case for Numpy is very limited to scientific computing.

Right. So go tell that to folks like Google or Facebook that are building most of the deep-learning stuff using this. Or to financial analysts building investment plans using tools like Numpy & Pandas, etc. Or people doing any sort of data mining (practically everyone today - Numpy is free and much faster than Matlab which used to be the tool of choice).

There are numerous benchmarks available online showing that C/C++ is generally significantly faster in most scenarios. Even with clever JIT technologies like Numba it is, in general, nowhere near as fast as C/C++. Sure, in some very specific benchmarks it can get close, but in general use, no. In the end, Python/Cython/Numba/Numpy etc. can't beat the fact that in C/C++ you have full control of cache lines and memory layout, and with these mechanisms you can achieve the best possible performance.

I'm the kind of developer who cares about performance in any context, and I'm not going to compromise on performance by writing my applications in Python and then having to depend on various Python modules to minimize the performance gap. That is my opinion, and in a professional context I feel even more strongly about this.

The problem is that what you are saying is relevant only if you are writing a close-to-the-metal application (btw, I do wonder how you get "full control of cache lines" - you can at best ask for contiguous memory allocation and a certain alignment). None of this is at all relevant for an application which spends 99.9% of its time waiting for a character to appear on a file descriptor - such as a terminal emulator. Or pretty much any application that has a UI.

What matters a lot, though, is developer productivity, because it is directly related to how costly (or not) the project is for the company. The whole time-is-money thing. Once your job becomes writing complex algorithms that involve a lot of math, networking, or anything else not covered by the standard C/C++ libraries, you will start to appreciate stuff like a good set of libraries and an expressive programming language (be it Python, Julia, Haskell, C# or whatever else).

Why do you think Java and C# became so popular? They have large runtimes, Java is terribly verbose, and both are horrible for anything system-level. However, both have enormous libraries of code available that make handling common tasks a breeze. Try to do any sensible (i.e. complex) networking in C/C++ without Boost or something like 0MQ or ACE, then do the same in C# using only the standard libraries, and you will see what I am talking about.

I have nothing against C or C++ and I still write tons of code in them, but treating them as a sort of holy grail that nothing ever comes close to is totally counterproductive. I could spend a week or two writing an embedded web server in C++ for an application. Or I can write 5 lines of Python, be done in 30 minutes, and move on to solving the problems the client actually pays me for.
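For what it's worth, the "5 lines of Python" claim is barely an exaggeration. Here is a minimal sketch of an embedded web server using only the Python standard library; the handler name, port, and response text are my own illustrative choices, not anything from this thread:

```python
# Minimal embedded web server, standard library only.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET with a fixed plain-text body.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from an embedded web server\n")

if __name__ == "__main__":
    # Serve forever on localhost; Ctrl-C to stop.
    HTTPServer(("127.0.0.1", 8000), Hello).serve_forever()
```

The equivalent in C++ without a third-party library would indeed mean writing an HTTP parser and socket handling by hand.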

If I have learned anything over my career, it is that the most valuable skill is not expertise in a particular programming language. A much more important skill is using the right tool for the job and keeping your eyes open to new things, instead of being stuck on incorrect assumptions because you saw something in the past and didn't like it.

Yet many Python applications you find still crash with the default Python stack trace - it's not pretty.

Yes, and a lot of C/C++ applications crash with a segmentation fault or a bus error. That is even less pretty, because nobody has any clue why. But perhaps it is more acceptable because users are used to applications just closing on them? That's really a silly point to make, IMO.
« Last Edit: October 02, 2017, 08:21:01 pm by janoc »
 

Offline lundmarTopic starter

  • Frequent Contributor
  • **
  • Posts: 436
  • Country: dk
Re: tio - A simple TTY terminal I/O application for linux
« Reply #43 on: October 03, 2017, 03:15:46 pm »
I am not asking you to - re-writing your terminal emulator in Python just for the sake of it would be obviously silly.

Of course not - that would indeed be very silly.

I am not sure how you determined that "I don't care about the packager or end user experience". Python packages and software are routinely packaged for distros with no issues. If the package has a proper build using distutils, it is no problem whatsoever. E.g. my Mageia 6 has over 1000 Python packages listed in the repository, including major software like GnuRadio, the entire Numpy/Scipy stack, etc. Packaging Python applications is no different from packaging C/C++ - you have to handle dependencies there as well.

And end-users are not supposed to build from source (even if it is not difficult at all), I think I have been rather explicit about it.

I'm not saying end users should build from source. I meant that for package maintainers it is important, and they will have to deal with missing or out-of-date dependencies. From a package maintainer's point of view, having lived through the painful years of the migration from Python 2.x to 3.x, there have been many distribution and stability issues. But things have clearly stabilized today with Python 3.x and that is good - I'm just still not a fan.

Of course there are no issues for end users if distributed well. On Linux there are rarely issues if users make use of their system's native package manager, as it will take care of all dependencies, and these dependencies will be shared among applications so you can keep your package as small as possible. However, on Windows you will have to bundle all your dependencies, or rely on e.g. setuptools to download your Python dependencies, to make the end user experience a good one. Even then you have to decide whether to bundle/download a full Python installation or rely on the end user installing the correct, up-to-date version of Python/Anaconda (2.x vs 3.x). That's fine, it's just not the stuff I prefer to deal with when distributing applications.

Whether I like it or not is beside the point (btw, I have been a Linux user since 1994). If you deal with software for business, you will have to deal with Windows too; that's just a fact of life. I have had to deal with various Unix systems over time, but I have yet to see anyone using Hurd. So it is cool that you are thinking about portability to it (seriously, are your Unix/Linux TTY APIs going to work on Hurd? Ehm ...)
Great, and I've been a Linux hacker since 1995 and have worked with embedded Linux professionally for many years, but that is irrelevant to this discussion. The point is that cross-platform can be more than just Windows vs. Linux vs. Mac. And yes, eventually I plan to make tio also work with GNU Hurd. Hurd is an interesting OS and good work is being put into it by the GNU people and others. Its microkernel design is superior to Linux in important ways, but that is an entirely different discussion we don't need to engage in here.

Re package management - pip & anaconda work just fine on Windows. Anyhow, you are missing the point again - you don't distribute your application as just source code on Windows; you build a binary so that the user doesn't need to build it themselves. Yes, that is possible with Python. Then you have a normal, self-contained exe file.

You keep misrepresenting what I'm saying. Of course you don't distribute source code to end users, especially not on Windows. And yes, for Windows I certainly prefer to distribute applications as a self-contained exe with no external dependencies. Of course I know you can do that for Python applications too, but as I mentioned above, you have to decide: will the end user have to install Python 2.x or 3.x separately, will you bundle a potentially duplicate Python installation (which increases installation size), or will you compile all your Python into an executable so that it does not depend on Python at runtime? I'm simply not a fan of any of these mechanisms.

What? That's a bit like saying that using the standard C library goes against the point of using the C language for writing an application ... I didn't know that writing everything from scratch is the only acceptable form.

No, you are missing my point. What I'm saying is that if I'm going to write a professional self-contained application with performance in mind, then my preferred choice is to write it in e.g. C/C++ from the get-go and reap the performance gains immediately across my entire application. And of course I never write everything from scratch - there are so many high-quality, well-abstracted C/C++ libraries available out there that even doing complex stuff has become trivial. Sure, it takes a little longer to write a C/C++ application, but not as much longer as people think. This is my professional preference. Of course, if I'm putting something together quickly and performance is not the first priority, then yes, I use Python or whatever other higher-level language fits the job.

Right. So go tell that to folks like Google or Facebook that are building most of the deep-learning stuff using this. Or to financial analysts building investment plans using tools like Numpy & Pandas, etc. Or people doing any sort of data mining (practically everyone today - Numpy is free and much faster than Matlab which used to be the tool of choice).

And I think that is great - it's an improvement. It used to be that scientists preferred using good ol' antiquated Matlab in combination with optimized computational libraries, typically written in C/C++, to gain better performance and thus avoid waiting weeks for their computations. Now, instead, they get to use Python, Lua, etc. in combination with optimized computational libraries typically written in C/C++. I mean, it is no coincidence that the core of Numpy is written in C.
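That split - expressive Python on top, an optimized C core underneath - is easy to demonstrate even without Numpy. The sketch below (my own illustration, not anything from this thread) times a pure-Python summation loop against the built-in sum(), whose loop runs in C inside the interpreter:

```python
import timeit

N = 1_000_000
data = list(range(N))

def py_sum(xs):
    """Pure-Python loop: every iteration goes through the bytecode interpreter."""
    total = 0
    for x in xs:
        total += x
    return total

# Both produce the same result ...
assert py_sum(data) == sum(data)

# ... but the built-in sum() runs its loop in C, so it is typically
# several times faster than the interpreted loop above.
t_py = timeit.timeit(lambda: py_sum(data), number=10)
t_c = timeit.timeit(lambda: sum(data), number=10)
print(f"pure-Python loop: {t_py:.3f}s, built-in sum(): {t_c:.3f}s")
```

The same pattern, scaled up, is what Numpy does: the Python you write is glue, and the hot loops execute in compiled code.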

The problem is that what you are saying is relevant only if you are writing a close-to-the-metal application (btw, I do wonder how you get "full control of cache lines" - you can at best ask for contiguous memory allocation and a certain alignment). None of this is at all relevant for an application which spends 99.9% of its time waiting for a character to appear on a file descriptor - such as a terminal emulator. Or pretty much any application that has a UI.

Not full control in the literal sense, of course, but as good as it gets. There are many tricks you can do with the C/C++ compiler, linker, and language constructs to make sure your most performance-critical code aligns with and fits within the cache lines of your CPU, and to maximize the chance it will be picked up by the cache mechanisms. Even moving code blocks around in the memory layout to trick the cache prefetcher can yield surprising gains. It's almost an art form. This is one of the reasons optimized libraries are often written in C/C++.

What matters a lot, though, is developer productivity, because it is directly related to how costly (or not) the project is for the company. The whole time-is-money thing. Once your job becomes writing complex algorithms that involve a lot of math, networking, or anything else not covered by the standard C/C++ libraries, you will start to appreciate stuff like a good set of libraries and an expressive programming language (be it Python, Julia, Haskell, C# or whatever else).

Why do you think Java and C# became so popular? They have large runtimes, Java is terribly verbose, and both are horrible for anything system-level. However, both have enormous libraries of code available that make handling common tasks a breeze. Try to do any sensible (i.e. complex) networking in C/C++ without Boost or something like 0MQ or ACE, then do the same in C# using only the standard libraries, and you will see what I am talking about.

I'm not saying C/C++ is the one and only way. I think you should pick the right tool for the right job, and I think we agree on that point. However, in the case of C++ there are so many good libraries and UI toolkits available that help speed up development, to the point where you can justify its use.

In C++ it does not really matter that much whether you use Boost or the standard libraries. Fun fact: stuff conceived in Boost often ends up in the standard libraries.

I have nothing against C or C++ and I still write tons of code in them, but treating them as a sort of holy grail that nothing ever comes close to is totally counterproductive. I could spend a week or two writing an embedded web server in C++ for an application. Or I can write 5 lines of Python, be done in 30 minutes, and move on to solving the problems the client actually pays me for.

If I have learned anything over my career, it is that the most valuable skill is not expertise in a particular programming language. A much more important skill is using the right tool for the job and keeping your eyes open to new things, instead of being stuck on incorrect assumptions because you saw something in the past and didn't like it.

Again we agree: the right tool for the right job. However, I will say that becoming an expert in a specific language does carry a lot of merit for many jobs. I think it is more important, though, to simply be an expert programmer who is familiar with various programming paradigms and techniques, because that makes it possible to quickly adapt to and use new languages fitting the job at hand.

Yes, and a lot of C/C++ applications crash with a segmentation fault or a bus error. That is even less pretty, because nobody has any clue why. But perhaps it is more acceptable because users are used to applications just closing on them? That's really a silly point to make, IMO.

It does require some level of discipline to introduce sensible error handling in either language. I just wish Python wasn't so noisy in its default verbosity.
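One common way to tame that default verbosity is a thin top-level wrapper that turns an uncaught exception into a one-line error message and a conventional non-zero exit code instead of a full stack trace. A minimal sketch; run(), main(), and the error text are purely illustrative names, not anything from tio:

```python
import sys

def run(main):
    """Call main(); on an uncaught exception, print one line and return 1."""
    try:
        return main()
    except Exception as e:  # deliberate catch-all at the top level only
        print(f"error: {e}", file=sys.stderr)
        return 1

def main():
    # Simulated failure, standing in for real application logic.
    raise RuntimeError("device /dev/ttyUSB0 not found")

if __name__ == "__main__":
    sys.exit(run(main))
```

The user sees "error: device /dev/ttyUSB0 not found" on stderr rather than a multi-line traceback, while the traceback can still be re-enabled for debugging by re-raising inside the handler.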
« Last Edit: October 03, 2017, 06:20:33 pm by lundmar »
https://lxi-tools.github.io - Open source LXI tools
https://tio.github.io - A simple serial device I/O tool
 

Offline HoracioDos

  • Frequent Contributor
  • **
  • Posts: 344
  • Country: ar
  • Just an IT monkey with a DSO
Re: tio - A simple TTY terminal I/O application for linux
« Reply #44 on: October 07, 2017, 01:11:15 pm »
Hi.
Until now I was used to cu and screen. Now I'm trying tio. For GUI: CuteCom, Moserial and CoolTerm work fine. For terminal: Tilix
 

Offline lundmarTopic starter

  • Frequent Contributor
  • **
  • Posts: 436
  • Country: dk
Re: tio - A simple TTY terminal I/O application for linux
« Reply #45 on: October 07, 2017, 08:04:13 pm »
Hi.
Until now I was used to cu and screen. Now I'm trying tio. For GUI: CuteCom, Moserial and CoolTerm work fine. For terminal: Tilix

 :-+

You have mentioned that your current GNU/Linux distribution is Mint.

I've talked to the Debian package maintainer of tio and he will soon upgrade it to the latest version. Since Mint is a Debian derivative, you should soon get access to the latest version via your package manager.
 

Offline HoracioDos

  • Frequent Contributor
  • **
  • Posts: 344
  • Country: ar
  • Just an IT monkey with a DSO
Re: tio - A simple TTY terminal I/O application for linux
« Reply #46 on: October 07, 2017, 08:38:09 pm »
I've talked to the Debian package maintainer of tio and he will soon upgrade it to the latest version.

Double  :-+ . Mint has a Debian edition.

Since Mint is a Debian derivative you should soon get access to the latest version via your package manager.
Mint is mainly an Ubuntu derivative. The Ubuntu and Debian package maintainer teams and repos are different; I don't know how they talk to each other.
 

Offline lundmarTopic starter

  • Frequent Contributor
  • **
  • Posts: 436
  • Country: dk
Re: tio - A simple TTY terminal I/O application for linux
« Reply #47 on: October 07, 2017, 08:57:38 pm »
Mint is an Ubuntu derivative mainly. Ubuntu and Debian package maintainers teams and repos are different. I don't know how they talk to each other.
True, they are not exactly the same. Mint is a derivative of Ubuntu. Ubuntu is a derivative of Debian. They all pick and choose and customize as they need. That being said, they are all generally the same - mainly a Debian derivative. Changes/additions in Debian usually trickle down to the other derivatives after some time.
« Last Edit: October 07, 2017, 08:59:20 pm by lundmar »
 

Offline radar_macgyver

  • Frequent Contributor
  • **
  • Posts: 698
  • Country: us
Re: tio - A simple TTY terminal I/O application for linux
« Reply #48 on: October 07, 2017, 09:18:04 pm »
Thanks for this - handling USB-to-serial adapter disconnects smoothly is an option that helps a lot when dealing with embedded targets that need reboots.

With the serial port closed you should be able to scroll back and forth through the history. Hyperterminal does all that so I'm still using it through a VM because there simply isn't a Linux alternative.

I've used picocom in the past, and when you quit, you can use your terminal emulator's scrollback buffer to view what came out of the serial port. I'm guessing tio does the same.
 

Offline lundmarTopic starter

  • Frequent Contributor
  • **
  • Posts: 436
  • Country: dk
Re: tio - A simple TTY terminal I/O application for linux
« Reply #49 on: October 07, 2017, 11:05:23 pm »
Thanks for this, handling USB to serial adapter disconnects smoothly is an option that helps a lot when dealing with embedded targets that need reboots.

I think it is a useful feature, especially if you are connecting/disconnecting various boards during the day in your work or hacking sessions. You simply start tio for your serial device and leave it there for the rest of the day watching it automatically connect and reconnect. Most other terminal tools require some sort of restart for this scenario.

NOTE: Make sure to use a serial tty device found in /dev/serial/by-id/; otherwise it might pop up as a different tty device file when reconnected, and then tio will not find it again.


I've used picocom in the past, and when you quit, you can use your terminal emulator's scrollback buffer to view what came out of the serial port. I'm guessing tio does the same.

Absolutely. tio is intentionally made so it does not get in your way by messing with your terminal using fancy ncurses features, buffers, etc. When you quit tio you can use your terminal's scroll buffer to view all your history, limited only by your terminal's scroll-buffer size.

If one does not want to use scroll buffers then tio also supports logging to a file.

Btw, Mac Gyver forever! ;)
« Last Edit: October 08, 2017, 02:57:58 pm by lundmar »
 

