Hi Martin,
It sounds like you are on the right track (at least in my opinion). For a super simple approach, you could use FTP over SSL, aka FTPS. There are tons of servers available for basically every operating system, so it would be easy for you or a client to set one up in their own infrastructure (I think Windows even supports this out of the box). Lots of FTP servers also have a GUI, particularly on Windows, so it would be easy for an IT person to add or delete users, and many can integrate into existing LDAP/Active Directory authentication systems. I'm not particularly fond of Windows, but lots of businesses use it, so you might as well fit into their processes as much as possible.
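For what it's worth, talking to an FTPS server from a script is trivial too. Here is a minimal sketch using Python's standard ftplib; the host, credentials, and file names are obviously placeholders:

    # Minimal FTPS upload sketch; host/credentials/paths are placeholders.
    from ftplib import FTP_TLS

    def upload_ftps(host, user, password, local_path, remote_name):
        ftps = FTP_TLS(host)
        ftps.login(user, password)
        ftps.prot_p()  # encrypt the data channel too (it is plaintext by default)
        with open(local_path, "rb") as f:
            ftps.storbinary("STOR " + remote_name, f)
        ftps.quit()

    upload_ftps("ftp.example.com", "martin", "secret", "scan_0001.dat", "scan_0001.dat")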
I would be more concerned about how you handle transfer interruptions and the like, which can leave you with corrupted data. TCP will make sure your packets are not corrupted in transit, but it can't guarantee they all make it to the endpoint if the Wi-Fi drops out. In my experience, 5-10 MB per chunk is a lot of data to transfer over a potentially lossy connection. You will need some way to checksum the data on the server to see whether it is valid, and I fear that will require writing some custom software. It doesn't appear that you can reliably get a hash out of S3 for a file on the server (well, you can get a "hash" via the ETag, but the algorithm used isn't always something you can compute on both ends). There is a HASH command extension to FTP, and it does have experimental support in FileZilla, but it's hard to tell how many servers actually support it.
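To illustrate the ETag problem: for multipart uploads, as I understand it, S3 MD5s each part, then MD5s the concatenation of those binary digests and appends the part count. So you can only reproduce it if you happen to know the exact part size the uploader used (and it doesn't hold at all for KMS-encrypted objects). A quick sketch:

    # Reconstructing an S3 multipart ETag, assuming you know the part size.
    import hashlib

    def s3_multipart_etag(path, part_size=8 * 1024 * 1024):  # part size is a guess
        digests = []
        with open(path, "rb") as f:
            while True:
                part = f.read(part_size)
                if not part:
                    break
                digests.append(hashlib.md5(part).digest())
        if len(digests) == 1:
            return digests[0].hex()  # single-part uploads get a plain MD5
        return hashlib.md5(b"".join(digests)).hexdigest() + "-" + str(len(digests))

Personally I would sidestep all of that and have your custom software compute, say, a SHA-256 per chunk and verify it on the server. Much less fragile.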
There are many options for forwarding the data from FTP to S3/Google Cloud/etc. if you really need that, for example:
https://www.thorntech.com/products/sftpgateway/
With FTP you would also avoid vendor lock-in.
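If you'd rather roll the forwarding yourself, the core of it is pretty small. Here is a rough sketch with boto3 that sweeps the FTP landing directory for finished uploads; the bucket name, directory, and the ".done" marker convention are all made up:

    # Sweep the FTP landing directory and push completed files to S3.
    import pathlib
    import boto3

    LANDING_DIR = pathlib.Path("/srv/ftp/incoming")  # hypothetical landing dir
    BUCKET = "my-scan-bucket"                        # hypothetical bucket

    s3 = boto3.client("s3")
    for marker in LANDING_DIR.glob("*.done"):        # uploader drops foo.dat.done when finished
        data_file = marker.with_suffix("")           # foo.dat.done -> foo.dat
        s3.upload_file(str(data_file), BUCKET, data_file.name)
        data_file.unlink()
        marker.unlink()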
PS: Not to nitpick, but I think the more idiomatic expression is something along the lines of "But maybe this solution is overkill, and there is a simpler one available". I haven't heard "overkilling" used before.