Author Topic: Show definitions + autocomplete function parameters for C (text editor over ssh)  (Read 4274 times)


Offline RoGeorgeTopic starter

  • Super Contributor
  • ***
  • Posts: 6381
  • Country: ro
What editor/setup to use when writing C code remotely, after connecting with ssh to a headless Raspberry Pi (RPi)?


The remote machine is a small SBC, a Raspberry Pi 1B running Debian 12 Bookworm.  That's an RPi model from long ago, and its hardware is very modest: a 700 MHz single-core ARMv6 (32-bit) CPU with 256 MB of RAM.

For now, I log in to the RPi remotely, in text mode, using screen+ssh from a desktop, and edit the sources with vim.  Apart from the gazillion keyboard shortcuts that I keep forgetting, I can only get autocomplete suggestions for names: it does not show the function parameters, and CTRL+] doesn't jump to the definition.

Stack Overflow says that, to see definitions with vim, I first need to create a tags database with ctags -R for each project.  OK, but isn't that a little cumbersome?  Should I use something else for editing C?  This setup is for learning only: many projects, but all very small, usually just a single .c file and a Makefile.
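From what I've gathered, the ctags + vim setup would amount to just a few lines of configuration; this is only a sketch using standard vim features (the tag search path and the built-in C omnifunc), untested on my side:

```vim
" Sketch of a minimal ~/.vimrc for tag-based C navigation:
set tags=./tags;,tags            " look for a 'tags' file here, then upward
" With a tags file present, CTRL+] jumps to a definition, CTRL+T jumps back.
" Regenerate the database from inside vim with  :!ctags -R .
set omnifunc=ccomplete#Complete  " built-in C completion: CTRL+X CTRL+O
```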

Do I use the right setup?  Or should it maybe be something other than vim?  It must be light on hardware requirements, but with syntax coloring and decent autocomplete.


I've read that the other alternative to ctags is a client-server architecture, with background services indexing the sources and running at all times, something invented by Microsoft and open sourced, but that seems heavy for my RPi.  I've tried to install neovim with the kickstart setup on top (this setup has the client-server style of code autocomplete, plus some other goodies like fuzzy search, all pitched in this 30-minute video):
The Only Video You Need to Get Started with Neovim
TJ DeVries
https://youtu.be/m8C0Cq9Uv9o

Well, following the installation steps for neovim + kickstart on my RPi, it failed with a segmentation fault.  ???  I've dd-ed the RPi SD card with the stock OS image again, just in case the segmentation fault was caused by some malicious attempt (it probably wasn't, but anyway).


The question is: what setup to use when editing small/hobby C projects on a headless RPi over ssh?
« Last Edit: April 17, 2024, 07:50:45 am by RoGeorge »
 

Online retiredfeline

  • Frequent Contributor
  • **
  • Posts: 550
  • Country: au
If the RPi is on the same LAN as your Linux desktop, you could export a directory to the RPi with NFS and do your editing on the desktop but the compiling on the RPi. This also backs up your work along with the desktop's backups.
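A sketch of the NFS side, for reference; the paths and address below are placeholders, not anything specific:

```shell
# On the desktop (NFS server), add an export line to /etc/exports:
#   /home/user/projects  192.168.1.50(rw,sync,no_subtree_check)
# then reload the export table:
#   sudo exportfs -ra
# On the RPi (client), mount it:
#   sudo mount -t nfs desktop.local:/home/user/projects /mnt/projects
```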
 
The following users thanked this post: RoGeorge

Offline RoGeorgeTopic starter

  • Super Contributor
  • ***
  • Posts: 6381
  • Country: ro
Interesting, I hadn't thought of doing that.  Looks like a clever hack, but I'm not sure which directories from the RPi to mount.  I will need to build kernel modules, so for autocomplete to work properly I would need the kernel headers from the RPi.  I've noticed (during make) that the kernel headers are accessed through a link at /lib/modules/5.15.84+/build:
Code: [Select]
ls -l /lib/modules/5.15.84+
total 2472
lrwxrwxrwx  1 root root     31 Mar  6  2023 build -> /usr/src/linux-headers-5.15.84+
drwxr-xr-x 11 root root   4096 Apr 14 13:50 kernel
-rw-r--r--  1 root root 582872 Mar  6  2023 modules.alias
-rw-r--r--  1 root root 612415 Mar  6  2023 modules.alias.bin
-rw-r--r--  1 root root  13930 Mar  6  2023 modules.builtin
-rw-r--r--  1 root root  25034 Mar  6  2023 modules.builtin.alias.bin
-rw-r--r--  1 root root  15282 Mar  6  2023 modules.builtin.bin
-rw-r--r--  1 root root  77245 Mar  6  2023 modules.builtin.modinfo
-rw-r--r--  1 root root 221390 Mar  6  2023 modules.dep
-rw-r--r--  1 root root 298899 Mar  6  2023 modules.dep.bin
-rw-r--r--  1 root root    324 Mar  6  2023 modules.devname
-rw-r--r--  1 root root  66966 Mar  6  2023 modules.order
-rw-r--r--  1 root root    913 Mar  6  2023 modules.softdep
-rw-r--r--  1 root root 262276 Mar  6  2023 modules.symbols
-rw-r--r--  1 root root 318778 Mar  6  2023 modules.symbols.bin

I don't know how the autocomplete works: whether it blindly indexes any symbol it sees in any source file, or whether the symbol indexing is context-aware.

For example, here: do I mount /usr/src/linux-headers-5.15.84+, or /lib/modules/5.15.84+/build?  Or does it not matter at all from which point the sources are mounted, as long as they are all seen by the symbol indexer?
« Last Edit: April 17, 2024, 08:26:35 am by RoGeorge »
 

Offline RoGeorgeTopic starter

  • Super Contributor
  • ***
  • Posts: 6381
  • Country: ro
you could export a directory to the RPi with NFS and do your editing on the desktop but the compiling on the RPi

Starting from your NFS export idea, my next wish was not to wear out the SD card, so I thought of exporting a directory from the PC instead: a directory where the RPi can write to a spinning-rust HDD.  At that point, why not use the PC for compilation too, cross-compile for 32-bit ARM, and just copy the binaries back to the RPi's SD card?

The setup to cross-compile for the RPi is documented at https://www.raspberrypi.com/documentation/computers/linux_kernel.html#choosing_sources

So I've fetched the kernel sources from the RPi GitHub repo, cloning from the tag matching the kernel version I was using on the RPi (https://github.com/raspberrypi/linux/releases/tag/1.20230106), and cross-compiled the kernel in order to have the right version of the RPi kernel headers on the PC.  Since it all now happens on the PC, I don't need NFS; instead I just copy the cross-compiled binaries from the PC to the RPi, using scp to copy over SSH.

Then I've added -j8 ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- to the make commands, and now, instead of 1-2 minutes for a compilation on the RPi, cross-compiling the same thing on the PC takes only 1-2 seconds  :D
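For reference, the whole recipe boils down to something like this (assuming the gcc-arm-linux-gnueabihf toolchain and the usual kernel build dependencies are already installed on the PC; bcmrpi_defconfig is the defconfig the RPi documentation gives for the ARMv6 boards like the 1B):

```shell
# Fetch the kernel sources matching the RPi's running kernel:
git clone --depth=1 --branch 1.20230106 https://github.com/raspberrypi/linux
cd linux
make -j8 ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- bcmrpi_defconfig
make -j8 ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage modules dtbs
# Copy the results back over SSH, e.g.:
#   scp arch/arm/boot/zImage user@rpi:/tmp/
```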

As for code snippets and autocomplete, on the RPi the choice was very limited because of its modest hardware, but on the PC it can be almost anything.  I am tempted to try some of those fuzzy-search, suggest-as-you-type tools.  Not sure yet which one.
« Last Edit: April 18, 2024, 05:21:50 am by RoGeorge »
 

Online retiredfeline

  • Frequent Contributor
  • **
  • Posts: 550
  • Country: au
Sure, go for it, as you're evidently capable of installing a cross toolchain.
 

Offline dave j

  • Regular Contributor
  • *
  • Posts: 131
  • Country: gb
Starting from your NFS export idea, my next wish was not to wear out the SD card, so I thought of exporting a directory from the PC instead: a directory where the RPi can write to a spinning-rust HDD.

I've set up most of my Pis to use an NFS root file system for just this reason. Earlier Pis still need an SD card containing the boot partition, but versions 4 and 5 can boot directly from the network.
I'm not David L Jones. Apparently I actually do have to point this out.
 

Offline 5U4GB

  • Frequent Contributor
  • **
  • Posts: 421
  • Country: au
Another option, if you don't want to get into X for opening edit windows on remote systems, is to use an SSH client that allows local editing of remote files: it copies the data across, lets you edit locally with the editor of your choice, and copies it back again when you save.  I believe MobaXterm allows this.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
If the local machine is running Linux, then one can use the inotifywait command and e.g. sftp to transfer the modified files; say
    inotifywait -r -m -e close_write --format '%w/%f' . | sed -ne 's|^\([^"]*\)$|put "\1"|p' | sftp -i identityfile -b - sftp://user@host/path
where host is the remote host name, user is your username on the remote host, identityfile is the path to the private key that authenticates user on host via SSH, and path is the (relative) path on the remote host.

Because this will transfer even temporary files used by your editor, you'd normally limit it to source files.  For example,
    inotifywait -r -m -e close_write --format '%w/%f' . | sed -ne 's|^\([^"]*\.[ch]\)$|put "\1"|p' | sftp -i identityfile -b - sftp://user@host/path
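As a quick local sanity check of that sed filter (hypothetical paths): only the .c/.h ones become put commands.

```shell
# Quick local check of the sed filter, using hypothetical paths:
printf '%s\n' ./src/main.c ./src/main.o ./notes.txt |
    sed -ne 's|^\([^"]*\.[ch]\)$|put "\1"|p'
# prints: put "./src/main.c"
```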
For more complex stuff, you can do a Bash loop,
    inotifywait -r -m --format '%e %w/%f' . | while read Line ; do
        Path="${Line#* }"
        Events="${Line%% *}"
        for Event in ${Events//,/ }; do
            # 'Event' for 'Path' occurred
        done
    done
where "$Event" will be the uppercase event (most interesting ones being CREATE, DELETE, MOVED_FROM, MOVED_TO, and CLOSE_WRITE) and "$Path" the relative path to the file or directory.
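To see what the parameter expansions in the loop do, here is the same splitting applied to a hypothetical sample line (real lines come from inotifywait itself):

```shell
# Splitting one 'EVENTS PATH' line the same way the loop above does:
Line='CLOSE_WRITE,CLOSE ./src/main.c'
Path="${Line#* }"        # strip through the first space -> './src/main.c'
Events="${Line%% *}"     # keep up to the first space    -> 'CLOSE_WRITE,CLOSE'
for Event in ${Events//,/ }; do
    printf '%s on %s\n' "$Event" "$Path"
done
# prints: CLOSE_WRITE on ./src/main.c
#         CLOSE on ./src/main.c
```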

For example, you could make the above into a script, piping its output, in the form of sftp commands, to sftp.  If you put the loop inside an explicit subshell, you can trap the HUP and INT signals to generate a quit command, ensuring final transfers are completed before the script exits.

(When the innermost comment is replaced with a Bash case "$Event" in ... esac statement generating corresponding sftp commands, I call the end result "pushing changes via sftp"; a kind of one-sided rsync.  I don't have a full one at hand right now, but if you want one, I can easily create and test one.)

Similarly, on the Linux SBC, you can run another inotifywait command to run different commands based on which files were most recently CLOSE_WRITE'd or MOVED_TO, for example make clean all.

Alternatively, you can use rsync (with or without inotifywait) to synchronize an entire tree between two machines.  I've used Bash and Bourne and POSIX shells for so long I find creating the above kind of shell script snippets easier and faster than setting up actual services.  In all cases, I expect you use public key -based authentication on the SBC, instead of password authentication.  If you ever ask yourself "how can I provide a password to ssh/sftp", you need to switch to public key authentication instead.
 
The following users thanked this post: DiTBho

Offline xvr

  • Frequent Contributor
  • **
  • Posts: 290
  • Country: ie
    • LinkedIn
I'd like to add my voice for ssh-based access. As for me, I use FAR (https://www.farmanager.com/) for both ssh access and file editing, and PuTTY for compiling and running.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
(In case it is not clear to everyone, sftp is part of standard server-side ssh services, dedicated for file transfers, while ssh proper is dedicated for terminal/pseudoterminal access.  sftp is NOT ftp-over-tls or ftp-over-ssh; it is a sub-protocol of SSH, "SSH secure file transfer protocol", to be specific.  Typical OpenSSH etc. installations enable it by default, with connections accepted by sshd, and sftp connections served by sftp-server automatically executed for each connection by sshd.)
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3963
  • Country: gb
inotify

This is great on { ext2, ext3, ext4, ... }, but it needs kernel support, and needs it enabled.
Unfortunately, things like BeFS (experimental) don't even have traditional inodes, so... you have to look for alternatives.
And I remember having some problems with inotify on bcachefs, too.

So the best-case scenario is with the ext* filesystem family, I guess.

I wouldn't even know whether something like this works with "strange" filesystems under FUSE, like { sshfs, ... }

The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
[inotify] but it needs kernel support, and needs it enabled.
Unfortunately, things like BeFS (experimental) don't even have traditional inodes, so... you have to look for alternatives.
Eh? inotifywait has nothing to do with inodes, and in recent (git) versions it uses the fanotify Linux kernel interface.  You definitely want to enable it.  It operates at the virtual filesystem level, i.e. on kernel-internal structures, not on on-disk filesystem structures.  That is, it has nothing to do with inodes per se; all it requires is that the filesystem provides fsids, filesystem-unique identifiers for each file and directory object in it.

It does not work on FUSE or remote file systems, because they do not monitor or report changes in the remote filesystem.  It is not a limitation of inotify/fanotify; it is a limitation of those file systems.

It is best to think of inotify/fanotify as providing notification of local syscalls affecting watched files and directories.  (Thus, other hosts accessing a shared network mount won't show up, nor will mmap(), mremap(), msync() or modifications via memory mapping.)

And I remember having some problems with inotify on bcachefs, too.
bcachefs is known to lock up in fsstress (March 2024), so I'd say bcachefs is still too buggy.

So the best-case scenario is with the ext* filesystem family, I guess.
ext2, ext3, ext4, xfs, msdos/fat/vfat, exfat, cramfs should work absolutely fine.
I am not sure about zfs, and for btrfs (especially subvolumes) you'll want to include the recent bugfixes.



If you use nfs, fuse, or an experimental filesystem without inotify/fanotify support, or do not want to use inotify-tools for some reason, you can use a formatted find command to rescan the filesystem sub-tree at regular intervals, checking for changes in size or modification timestamp, e.g.
    find rootdirs -printf '%T@ %s %p\0' > nul-separated-output-file
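As a minimal illustration of that polling idea (hypothetical file names; newline separators here for readability, while the NUL separator above is safer for unusual names):

```shell
# Snapshot mtime+size for every file, twice, and diff the snapshots:
dir=$(mktemp -d)
scan1=$(mktemp); scan2=$(mktemp)
echo 'hello' > "$dir/a.c"
find "$dir" -type f -printf '%T@ %s %p\n' | sort > "$scan1"
echo 'world' >> "$dir/a.c"               # modify the file between scans
find "$dir" -type f -printf '%T@ %s %p\n' | sort > "$scan2"
if diff -q "$scan1" "$scan2" >/dev/null; then
    echo 'no changes'
else
    echo 'changed'
fi
# prints: changed
```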

Personally, I prefer to write a small POSIX C program using nftw() to walk the directory tree(s), and a dynamic hash table (keyed on the DJB xor hash of the file path) to check for changes at regular intervals.  I would use a linked list of hash table entries, something along the lines of
Code: [Select]
typedef  struct entry  entry;
struct entry {
    struct entry *next;
    size_t hash;
    off_t  size;
    struct timespec  mtime;
    char  type;
    char  generation;  // Counter for detecting deletions
    char  path[];
};
where generation is incremented by one each time the entry is seen by nftw(), so that after a full pass, deleted files can be detected.  The path member at the end contains the full relative path to the file or directory, as a C99 (and later) flexible array member.  (Remember to allocate for, and set, the path-terminating NUL '\0' character.)

This kind of scanning can be run at idle priority via nice -n 19 ionice -c 3 command... .  The way Linux machines cache filesystem accesses, as long as there is enough RAM available compared to the number of items in the subtree thus scanned, repeated scanning produces surprisingly low I/O and CPU load.  (The initial scan is slow and relatively high-load, though, because it loads the filesystem data into memory.)
 
The following users thanked this post: DiTBho

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3963
  • Country: gb
It does not work on FUSE or remote file systems, because they do not monitor or report changes in the remote filesystem.
It is not a limitation of inotify/fanotify; it is a limitation of those file systems.

Thanks for the clarification!
Tested before reading your answer: as expected, inotifywait doesn't work on sshfs.
And it doesn't work on experimental versions of BeFS either, but I expected that.  On Linux, without extra patches, BeFS is read-only.
No problem, I was curious about this.
« Last Edit: April 18, 2024, 05:46:48 pm by DiTBho »
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6419
  • Country: fi
    • My home page and email address
It does not work on FUSE or remote file systems, because they do not monitor or report changes in the remote filesystem.
It is not a limitation of inotify/fanotify; it is a limitation of those file systems.
Thanks for the clarification!
:-+

Tested before reading your answer: as expected, inotifywait doesn't work on sshfs.
And it doesn't work on experimental versions of BeFS either, but I expected that.  On Linux, without extra patches, BeFS is read-only.
Yep.

The kernel implementation is here, but one of the most interesting features –– only on ext[234] for now! –– is the FAN_FS_ERROR event type, for file system health monitoring; see the samples/fanotify/ sample in the Linux kernel sources for this.  Essentially, it allows a separate daemon to detect filesystem-related errors.

So, if possible, it would be nice to get fanotify support for BeFS and other filesystems.  (Doing a quick web search, it looks like OpenZFS/zfs also has some issues with fanotify; but seems to work with inotify, although heavy users may wish to increase the number of watches available.)

As a workaround, the polling method via nice'd+ionice'd find or nftw() uses somewhat more resources (memory and CPU time), but requires no kernel support; and in general it isn't enough of a resource hog to actually be annoying.
 

Offline PlainName

  • Super Contributor
  • ***
  • Posts: 6954
  • Country: va
Do I use the right setup?

No. If you want to use decent development tools, run them on a decent development machine and then send the compiled executables to the target. Having to develop on the target is, IMO, a fad that's a bit silly.

Well, you did ask  >:D
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14666
  • Country: fr
Do I use the right setup?

No. If you want to use decent development tools, run them on a decent development machine and then send the compiled executables to the target. Having to develop on the target is, IMO, a fad that's a bit silly.

Well, you did ask  >:D

Yep, but if the OP is really uncomfortable with setting up a cross-compiler (or in case there isn't one, which is not the case here, but just saying), the closest reasonable approach is to edit your code in whatever programming editor/environment you like and only compile it on the target.
 

Offline RoGeorgeTopic starter

  • Super Contributor
  • ***
  • Posts: 6381
  • Country: ro
Do I use the right setup?

No. If you want to use decent development tools, run them on a decent development machine and then send the compiled executables to the target. Having to develop on the target is, IMO, a fad that's a bit silly.

Well, you did ask  >:D

Yep, but if the OP is really uncomfortable with setting up a cross-compiler (or in case there isn't one, which is not the case here, but just saying), the closest reasonable approach is to edit your code in whatever programming editor/environment you like and only compile it on the target.

You mean, like this:  https://www.eevblog.com/forum/programming/show-definitions-autocomplete-function-parameters-for-c-(text-editor-over-ssh)/msg5455418/#msg5455418  ?  :P

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14666
  • Country: fr
Do I use the right setup?

No. If you want to use decent development tools, run them on a decent development machine and then send the compiled executables to the target. Having to develop on the target is, IMO, a fad that's a bit silly.

Well, you did ask  >:D

Yep, but if the OP is really uncomfortable with setting up a cross-compiler (or in case there isn't one, which is not the case here, but just saying), the closest reasonable approach is to edit your code in whatever programming editor/environment you like and only compile it on the target.

You mean, like this:  https://www.eevblog.com/forum/programming/show-definitions-autocomplete-function-parameters-for-c-(text-editor-over-ssh)/msg5455418/#msg5455418  ?  :P

Yes, pretty much, but not exactly: the post you mention is still cross-compiling, while I'm suggesting an alternative: edit the code comfortably on a computer, but compile it directly on the target rather than cross-compiling.
« Last Edit: April 18, 2024, 10:33:46 pm by SiliconWizard »
 

Offline RoGeorgeTopic starter

  • Super Contributor
  • ***
  • Posts: 6381
  • Country: ro
About the question in the title, a helper tool that can suggest not only function names, but the function arguments too: I couldn't find anything yet (that is not a full-blown IDE like IntelliJ, CodeStudio, Eclipse).  It seems that ctags only shows names, without the function arguments.  Or maybe I don't know how to integrate ctags with vim, but I've searched and couldn't find out how to make ctags show the arguments.

I need something similar with the animated gif shown here:  https://github.com/neoclide/coc.nvim

So far, all the tools I could find that are capable of showing both the function names and their arguments in a text-only editor are based on LSP (Language Server Protocol).  This is very resource-intensive, and the ones I've tried install nodejs and npm, which seem to me like a huge pile of software.  I've always avoided installing such things.

Maybe I'm asking for the wrong thing, but how did people look up a function's arguments before the LSP tools?  If they only used ctags, with function names only and no arguments listed, how did they know what the arguments were?  Did they jump to the function's definition and then close that view, just to look up the arguments?  Did they open and read the headers?  :-// That would be very time-consuming.  What am I missing?



Maybe I'd better ask for generic advice regarding programming style (instead of asking for a tool):

How do you discover function names, their types and arguments, the already-defined constants, the names of existing macros, etc.?
« Last Edit: April 20, 2024, 05:34:32 am by RoGeorge »
 

Offline PlainName

  • Super Contributor
  • ***
  • Posts: 6954
  • Country: va
Quote
but how did people look up a function's arguments before the LSP tools?  If they only used ctags

I've never used ctags (knowingly) - just a pain to keep up to date. My introduction to this kind of thing was Brief, and every serious editor or IDE I've used since then has had to do pretty much the same thing in its own way. Maybe they use ctags under the bonnet, but I'm pretty sure not. My current editor/IDE is SlickEdit, which is definitely not free but (obviously) well worth the cost to me ;)

Anyway, the reason for mentioning that is that although SE has the parameter info you're after (and does it rather better, I think), I rarely use it. Instead, there is a 'preview' window which shows the context of whatever the cursor is hovering over. If it's a variable, it will show the declaration; a define, the definition; a function, the prototype; etc. The great thing about it is that you can then double-click whatever is there, and there it is in an edit window (either the one you're working in, a new one or, as in my case, the second edit window). You just get used to it being there, showing the detail of whatever you're working on.

It's difficult to describe, but I attach a quick screeny so you can see how I have it set up. It's configurable, but essentially it's just the few lines before and after whatever is of interest. You can scroll the window up and down if you need to, but you can't edit or do any of that stuff - it's a preview!

Maybe that's the kind of thing to look for instead of the in-place popups. Anyway, you did ask how others deal with this :)
 
The following users thanked this post: RoGeorge

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3963
  • Country: gb
I've never used ctags (knowingly) - just a pain to keep up to date

ctags works well in three cases:
  • you can integrate a backtracking mechanism in a text editor that implements "go to line": save the current position on a stack, query ctags for the line to jump to (it depends on where the function is defined), jump there, and later jump back
  • you can integrate a shortcut to regenerate the ctags database. As you modify a file, offsets start to appear between the positions ctags reports and the real positions; for a while they are tolerable, costing only a few presses of the arrow keys to correct the position, and then the database can be regenerated. This saves CPU time, since ctags takes a while to regenerate the database
  • you can use two working windows: in one you edit the code, in the other you jump to the definition, or to the documentation (Doxygen)

The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline DiTBho

  • Super Contributor
  • ***
  • Posts: 3963
  • Country: gb
Quote
To Purchase SlickEdit Pro or Standard 2023 (v28)
Email sales@slickedit.com

Indicate the product you want to purchase "SlickEdit Pro" or "SlickEdit Standard"

Include your name and billing address

After SlickEdit processes your email, you will be sent an invoice that can be paid on-line.
After SlickEdit processes your payment, you will receive an email with your license file and a download link for the installers.
(link here)

so, how much is it? let's check it out  :o :o :o
The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow
 

Offline xvr

  • Frequent Contributor
  • **
  • Posts: 290
  • Country: ie
    • LinkedIn
If you intend to edit code on the host and share it to the RPi via NFS, then you can use a full-featured IDE like Visual Studio Code. It's free, works on both Windows and Linux, and is open source.
It also includes plugins for remote development, just in case you end up not happy with NFS  :-\
 

Offline RoGeorgeTopic starter

  • Super Contributor
  • ***
  • Posts: 6381
  • Country: ro
I've been burnt by VS Code before.  I wrote some Arduino projects and took the extra time to heavily configure VS Code, adding a hardware debugger for the Arduino, automatic switching of the AVR chip into debugWIRE mode, etc.  A year later, when I tried to open that project again, nothing worked any more.  VS Code had moved on to higher versions and broken backward compatibility with some of the extensions I was using initially.  >:(

Since then, I haven't used VS Code again.

Another thing about VS Code: it is free and open source, but the binaries have more telemetry than what is documented and visible in its GitHub sources.  Some proprietary customization is applied when the VS Code binaries are built, and that customization is closed source and not documented.  It adds something more: not the telemetry that you can disable from the VS Code configuration files, but something else that still collects data and does its thing after all known telemetry is disabled:  https://vscodium.com/

To avoid that, there is another binary distribution called VSCodium, claiming to be the same as VS Code, but without the undocumented and always-on extra spyware from Microsoft.
« Last Edit: April 20, 2024, 06:47:09 pm by RoGeorge »
 

Offline PlainName

  • Super Contributor
  • ***
  • Posts: 6954
  • Country: va
Quote
To Purchase SlickEdit Pro or Standard 2023 (v28)
Email sales@slickedit.com

Indicate the product you want to purchase "SlickEdit Pro" or "SlickEdit Standard"

Include your name and billing address

After SlickEdit processes your email, you will be sent an invoice that can be paid on-line.
After SlickEdit processes your payment, you will receive an email with your license file and a download link for the installers.
(link here)

so, how much is it? let's check it out  :o :o :o

Yeah, their website has just undergone drastic surgery, so I guess they're not using it for online payments now. Normally this sort of messing about would put me off, so I'm glad I've been using it for 19 years and hadn't stumbled across that before :)
 

