There is a right way to do this. You give developers and testers the full error trace, but redact it when displaying to end users. End users can't do anything about the difference between 400 and 500 errors anyway. It's a pretty general rule: show people messages that are actionable. What counts as actionable is different for developers than for users.
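To make that concrete, here's a minimal Spring-flavoured sketch (the class and message text are my own, not from any real project): the full stack trace goes to the server log for developers and testers, the end user gets something generic and actionable.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@RestControllerAdvice
public class RedactingErrorHandler {

    private static final Logger log = LoggerFactory.getLogger(RedactingErrorHandler.class);

    @ExceptionHandler(Exception.class)
    public ResponseEntity<String> handle(Exception ex) {
        // Full stack trace goes to the server log, where developers/testers can see it.
        log.error("Unhandled error while processing request", ex);
        // The end user gets an actionable, non-technical message with no internals leaked.
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body("Something went wrong on our side. Please try again or contact support.");
    }
}
```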
Don't get me wrong, I know many ways to fix it. The tricky part is that I am not the one in control of it. The platform is customer-side, on-prem, k8s. Sure, we have input, but ultimately the architecture and infrastructure are already written in stone. Government projects, you wouldn't expect otherwise.
Truth be told, the most common approach these days is to just use a log aggregation service plus transaction IDs on requests, so a single query can pull back all related logs from all related services. I think we might even have that in the current project. I was just already in the k8s management UI, so I could open the app logs directly.
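Roughly what that looks like, assuming an SLF4J/Spring-style stack (the header name, MDC key and class name here are all my own invention): each incoming request gets a correlation id stamped into the logging context, and the aggregator can then pull every related log line across services from that one id.

```java
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.web.filter.OncePerRequestFilter;

import java.io.IOException;
import java.util.UUID;

public class TransactionIdFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        // Reuse the caller's id if present so the whole chain stays correlated end to end.
        String txId = request.getHeader("X-Transaction-Id");
        if (txId == null || txId.isBlank()) {
            txId = UUID.randomUUID().toString();
        }
        MDC.put("txId", txId); // every log line emitted during this request now carries the id
        try {
            response.setHeader("X-Transaction-Id", txId);
            chain.doFilter(request, response);
        } finally {
            MDC.remove("txId");
        }
    }
}
```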
Back to my ironman, however. We techies are surely familiar with the basic web auth login, like the one on this forum. If you put in an incorrect password, it will not tell you that your password is incorrect. That would be "telling": it would confirm that at least the username is correct.
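A toy illustration of the point (everything in it is made up, and a real user store would hold salted hashes, not plain strings): both failure modes return the identical message, so a wrong password never confirms that the username exists.

```java
import java.util.Map;
import java.util.Objects;

public class LoginCheck {

    // Stand-in for a real user store; plain-text passwords purely for illustration.
    private static final Map<String, String> USERS = Map.of("alice", "correct-horse");

    public static String login(String username, String password) {
        String stored = USERS.get(username);
        boolean ok = stored != null && Objects.equals(stored, password);
        // Same message whether the username is unknown or the password is wrong.
        return ok ? "Welcome, " + username : "Invalid username or password.";
    }

    public static void main(String[] args) {
        System.out.println(login("alice", "wrong"));   // Invalid username or password.
        System.out.println(login("mallory", "guess")); // Invalid username or password.
    }
}
```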
In an enterprise (micro)service architecture, individual services all require authorisation. So when service A wants to request a data lookup from service B (say), it has to provide proof of identity/authentication/authorisation to service B.
Rather than have each and every service handle ident/auth routines and data flows itself, we typically move the whole thing "out of band" and use an authentication service, sometimes called "IDAM". Any service can submit a pre-shared key, usually specific certificates, to the IDAM service and ask for a token for a role. As long as its credentials/certs are valid, it will receive an access token.
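Very roughly, the token request step looks something like the sketch below. The endpoint and parameter names are invented, and I've used a client-secret style credential for brevity where the setups I'm describing would normally use client certificates (mTLS); a real project would lean on the framework's OAuth2/OIDC client rather than hand-rolling HTTP.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class IdamTokenClient {

    private final HttpClient http = HttpClient.newHttpClient();

    // Ask the IDAM service for an access token covering the given role.
    public String requestToken(String role) throws Exception {
        String form = "grant_type=client_credentials"
                + "&client_id=service-a"
                + "&client_secret=pre-shared-secret"   // in reality: a client cert / mTLS
                + "&scope=" + role;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://idam.example.internal/oauth/token"))  // hypothetical endpoint
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        // Assume the body is a JSON document containing an access_token field; parsing elided.
        return response.body();
    }
}
```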
When service A now wants to make a request of service B, it sends the access token it got from IDAM along to service B. Service B then asks the IDAM server if the token is valid. (There are shortcuts with cryptographic validation that don't need a network hop.) If the token is valid for the authorisation service A actually requested, the request is permitted.
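And the A-to-B hop, again with invented URLs and class names: service A just attaches the token as a bearer credential, and service B either introspects it with IDAM or, if it's a signed JWT, verifies the signature locally and skips the extra hop.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ServiceAClient {

    private final HttpClient http = HttpClient.newHttpClient();

    // Service A calling service B, presenting the IDAM-issued token as a bearer credential.
    public String lookupFromServiceB(String accessToken, String recordId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://service-b.example.internal/records/" + recordId))
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Service B refused the request: " + response.statusCode());
        }
        return response.body();
    }
}
```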
Now bring the web login back to mind. Service A has something wrong with its authorisation and ACL matrix. When service A requests its token for Role_ABC, it is denied. However, rather than deny the authorisation, the IDAM service will simply, ignorantly as all hell, send back a 200 (OK) with an empty response.
For a bad actor trying to brute-force their way in, this is exceedingly annoying and unfruitful.
However, it is also exactly that annoying and unfruitful in development.
Poor or non-existent error handling downstream of this often means the "blank" token is then passed on verbatim with the request, so the actual failure manifests several steps along the chain from its cause.
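The guard that's usually missing is dead simple. Something like this (a sketch, not code from the real project) fails loudly at the cause, instead of letting the blank "token" ride along for several more hops:

```java
public final class TokenGuard {

    private TokenGuard() {}

    // Refuse to proceed when IDAM answered 200 OK but handed back nothing usable.
    public static String requireToken(String tokenResponseBody) {
        if (tokenResponseBody == null || tokenResponseBody.isBlank()) {
            throw new IllegalStateException(
                    "IDAM returned 200 OK but no access token - check the service's role/ACL configuration");
        }
        return tokenResponseBody;
    }
}
```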
Not all of the "poor or non-existent error handling" is the team's fault. A huge amount of all of the above is wrapped up in libraries. Those libraries are all auto-magically-voodoo-coupled/injected in amazingly complicated ways so that you don't need to write much buggy code in addition to what comes as "black magic" from the frameworks. That is, until the thing detonates inside its lovely opaque black box and all the ugly complicated goo comes out in stack traces 150 frames deep in abstractions, and it turns out to be some dumb parsing call that throws an IllegalArgumentException in the bowels of Spring when it tries to split a token on ".". Where exactly would you hook error handling code into that? (That is answerable, using Filter chains etc., but that's a digression.)
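For what it's worth, here is the shape of that answer as a generic sketch, not anything from the actual project: a filter sitting ahead of the security machinery can catch the parse failure and turn it into a sensible response. The class name is mine, and registering it at the right position in the chain is framework-specific.

```java
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.web.filter.OncePerRequestFilter;

import java.io.IOException;

public class TokenParseErrorFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        try {
            chain.doFilter(request, response);
        } catch (IllegalArgumentException ex) {
            // e.g. a malformed/empty token that blew up when the framework tried to split it on "."
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED,
                    "Invalid or missing access token");
        }
    }
}
```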
The general ethos of development in enterprise, I find, is to divide, compartmentalise and conquer. If it fails, bin it and move on. Because projects are split up into so many components, each one is fairly cheap to replace, so true diligence on the "life expectancy" of a service gets deprioritised in favour of rapid application development techniques. In fairness, the amount of effort you would need to spend to "overhaul" a 2000s REST API today would be far, far in excess of just rewriting it in a modern REST framework. 90% of the problems you had to solve yourself back then are gone, done, buried in extensively tested framework code.
A smaller number of people devise the required concoction of services, spec each one broadly, define their integrations/interfaces/contracts, and then the components can be farmed out and divided up among a team or multiple teams of devs. Each component is maybe a dozen days of work. Individual "projects" at the application "leaf" layer are just not that valuable individually.
Consider if you had an MCU project for a desktop product with a few interfaces and a colour display.
You "could" create one single firmware and try and get the core devs to work well with the UI devs, but you are likely to run up some bills on communications between the teams and code merge artifacts, code clobbers etc. etc.
Or you could split the development entirely and give each team an MCU of its own. (Whether that assumes the bean counters were asleep, or that MCU devs cost as much as enterprise devs, I don't know.) In this case, you would define the communications interface and contracts between the core MCU and the UI MCU. Now each team is free to develop along its own path, and as long as they never break that contract, everything works. The communication and contention points come down to the much more manageable scope of the integration specification. In enterprise, the economics mean the latter is nearly always preferred.
Go too far and divide things up too much, though, and it's just a sliding scale of "overhead" versus "manageability". At some point you end up in what I call "public toilet loo roll" territory: you aren't actually solving any complexity anymore, you are just aimlessly smearing it around in circles.