Your reasoning is detached from the outcome.
That’s a rather lofty accusation.
I worked in the software industry for years, and at a usability agency. I have relevant, real-world experience with this, and am not the deluded simpleton you essentially accuse me of being.
Yes, crash reports and usability reports are good data sources.
Do they benefit the user? That depends on the sausage factory in the middle of the process.
Sure, that’s fair.
I have never seen a usability study produce an outcome that benefits the user.
Hold on, we aren't talking about usability studies. We are talking about usage data, which is used to inform subsequent usability design.
I literally gave a real-world example: the post-paste popup menu in Microsoft software (Office, etc.). Usage telemetry had shown that the “paste” command is very frequently followed by “undo”, because the result was not as intended. People would then either use a Paste Special command, or paste normally and follow up with manual reformatting. So they added the little popup that lets you change the pasted formatting in situ. I think this is a fantastic feature, and well implemented: it makes it easy to recover from an unexpected result, yet doesn’t force any change to one’s workflow at all: you can simply ignore it and fix the problem the old way.
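To make the telemetry part concrete: the core of that analysis is just counting command bigrams across user sessions. Here is a minimal sketch, with hypothetical event names and obviously nothing like Microsoft’s actual pipeline:

```python
from collections import Counter
from typing import Iterable

def command_bigrams(events: Iterable[str]) -> Counter:
    """Count consecutive command pairs within one session's event stream."""
    counts: Counter = Counter()
    prev = None
    for cmd in events:
        if prev is not None:
            counts[(prev, cmd)] += 1
        prev = cmd
    return counts

# Hypothetical session log; real telemetry events would carry timestamps,
# app context, document state, and so on.
session = ["copy", "paste", "undo", "paste_special", "typing", "save"]
for (a, b), n in command_bigrams(session).most_common(3):
    print(f"{a} -> {b}: {n}")
```

A pair like paste → undo dominating the counts, aggregated over millions of sessions, is exactly the signal that the result of paste is routinely not what people wanted.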
I posit that they are run by people who have no idea what they are doing.
Every industry and specialty has both competent and incompetent people. You can’t just dismiss all usability research as “run by people with no idea what they’re doing”.
As for the other point, my day job for the last couple of years has been running the reliability engineering team at a very large fintech. If you think a crash dump results in a viable outcome for end users even 5% of the time, you are naive. Most of the time it is just noise. We get thousands of them an hour, and that is considered normal. Even when we do perform a causal analysis on a statistically common one, finding an engineer who can actually understand and solve the problem in a complex distributed system is an uphill battle as well.
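To give a sense of what “statistically common” means at that volume: nobody reads individual dumps. The firehose gets bucketed first, typically by some stack signature. A minimal sketch of that step, with hypothetical frame names and a grossly simplified signature scheme:

```python
import hashlib
from collections import Counter

def crash_signature(frames: list[str], depth: int = 5) -> str:
    """Hash the top few stack frames so near-identical crashes share a bucket."""
    return hashlib.sha1("|".join(frames[:depth]).encode()).hexdigest()[:12]

# Hypothetical reports; real dumps carry threads, registers, versions, etc.
reports = [
    {"frames": ["libfoo.parse", "app.load_doc", "main"]},
    {"frames": ["libfoo.parse", "app.load_doc", "main"]},
    {"frames": ["net.timeout", "rpc.call", "app.sync", "main"]},
]

buckets = Counter(crash_signature(r["frames"]) for r in reports)
for sig, n in buckets.most_common():
    print(sig, n)  # causal analysis starts with the biggest buckets
```

Even then, a big bucket only tells you where to look; it doesn’t hand you an engineer who understands the distributed system it crashed in.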
The ratio depends entirely on the product, of course. At the small software company I worked at, where the software could generate a crash report as a precomposed email (the user still had to actively send it), the trace went straight to the dev team, who knew exactly what it meant and could take action if necessary.
I don’t doubt for a second that in complex, larger systems the ratio of useful reports is smaller. But if you ask me, even if just 5% result in a bug being fixed, that is a good thing. I fail to see how that’s not better than nothing.
The general theme in the thread above is that there aren't a lot of people who know what they're doing. They all perform the appropriate-looking dances, though, and people who don't know what they're doing watch those dances and conclude the dancers might know what they're doing. It's not turtles all the way down; it's incompetence from top to bottom.

And that's why we shouldn't trust it: not because the idea is bad, but because the competence is.
I don’t disagree in principle with that statement, but maybe I’m just not quite as jaded as you.