I'm doing a bit of file archival work, and we have a few public-facing Apache directory listings like the one in this screenshot:
(https://tecadmin.net/wp-content/uploads/2019/05/apache-directory-listing.png)
The directory structure is fairly simple, but there are many of these directories.
The files are mostly ordinary: some are text and some are binary.
What I would like to do is basically clone those directories onto my local storage.
So far I have had some success with wget:
wget -m -p -E -k -K -np "http://domain.ltd/dir/"
I have also experimented with an assortment of other flags.
But I am finding that the .txt files do not download correctly, and sometimes the binary files come down corrupted as well.
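My suspicion (just a guess) is that the -E, -k, and -K flags are to blame: -E renames files based on their MIME type, and -k/-K rewrite and back up any file wget treats as a document, which is exactly the kind of modification that could mangle plain files. For straight file archival I believe they can all be dropped, along with -p (page requisites), leaving something like this (same URL as above; --cut-dirs=1 is an assumption based on the single /dir/ path component in my example):
wget -m -np -nH --cut-dirs=1 "http://domain.ltd/dir/"
If I understand the flags correctly, that should leave every file byte-for-byte as the server sent it.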
Does anyone have a better solution?
I also don't want the generated index pages saved locally, just the files themselves.
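If wget is still the right tool, I believe its --reject option accepts patterns, so something along these lines should skip the index pages (same assumptions as the sketch above; as I understand it, wget still fetches each index page to discover the links, but deletes it once it has been parsed):
wget -m -np -nH --cut-dirs=1 -R "index.html*" "http://domain.ltd/dir/"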