Fine, I realise your typical wired broadband runs at a 1:20–1:50 contention ratio and that average per-user usage is probably in the single-digit Mbps range (less than 10). However, isn't that what we'd call a pretty bad service? Whenever I had typical suburban (or small-town) DSL in the past I wasn't happy with it (in the UK), until I got Virgin cable service. That ran really great. This was in the days of pre-HD YouTube, when Netflix was a new thing. I can only imagine how badly your typical 1:50 (or, as you say, 1:64) contended DSL runs now.
With these (typical) numbers, users hitting congestion will be very rare, at least at this part of the network. The vast majority of the time you'll be able to reach your contracted rate, because there will be plenty of idle capacity. Consider GPON, where up to 128 users can share 2.4Gbps of downstream bandwidth. A typical provider might sell 100Mbps or 200Mbps service on such a technology (or even 1Gbps, though that's riskier) with a 1:64 split. Say these customers baseline at 10Mbps each (a high estimate): that consumes 640Mbps, leaving roughly 1.8Gbps for bursts. Since each customer can only burst to 200Mbps, at least 10 of the 64 customers would have to demand their full bandwidth at the same time before there's any problem. This is statistically very unlikely.
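To put a number on "statistically very unlikely": if you model each subscriber as independently bursting with some duty-cycle probability, the chance that 10 or more of 64 burst at once is just a binomial tail. A quick sketch (the 5% per-instant activity probability is my own illustrative assumption, not a figure from the thread):

```python
from math import comb

def p_congestion(n=64, p_active=0.05, burst_slots=10):
    # Probability that >= burst_slots of n subscribers demand their full
    # rate simultaneously, assuming each is independently active with
    # probability p_active at any given instant (an idealised model).
    return sum(comb(n, k) * p_active**k * (1 - p_active)**(n - k)
               for k in range(burst_slots, n + 1))

print(p_congestion())  # well under 1% with these assumed inputs
```

Even granting each customer a generous 5% chance of bursting at any instant (so ~3 of 64 active on average), the tail probability of 10+ simultaneous bursters is a fraction of a percent. The model breaks down when demand is correlated (say, a live sports stream), which is exactly when shared media do see congestion.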
DSL doesn't even have oversubscription in the last mile; your bandwidth to the DSLAM is dedicated. Of course the DSLAM has limited backhaul capacity, but with modern FTTC deployments, oversubscription will likely be even lower than with PON: the cabinet's range is so short that it doesn't serve many subscribers, and the rates are lower.
If you're experiencing performance problems, especially at peak times, these days it's much more likely to be on your ISP's peering/transit links than in the last mile. The exception is cable, where node splits are difficult and very expensive, so providers very often oversubscribe far more than the 1:32–1:64 typical of PON, and last-mile congestion is a lot more common there.
Regarding Starlink, the biggest thing I'm curious about is how they accomplish the routing and ground-station uplinks. They talk a lot about inter-satellite laser links, but most traffic will go from users to the terrestrial Internet, not between users. The satellites that happen to pass over ground stations will have to handle a lot of backbone traffic, and then that traffic will have to be handed over to the next satellite when the first one goes out of range, and so on. We all know how shitty LTE gets when you're on a fast train and cell handovers happen very frequently. This is the same problem, but much harder.
My understanding is that the FSO links are there to reduce the number of ground stations required / increase the coverage area of each ground station, not to increase capacity, enabling them to cover areas with few subscribers or poor backhaul infrastructure where building a ground station wouldn't be economical. Currently, a satellite must talk to the subscriber and a ground station at the same time, so both must be within its view, which gives each ground station a limited coverage area. With the FSO system, the ground station can be further away. Presumably, if bandwidth grew to the point where this started to create congestion, they would build more ground stations in that area.
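For a sense of scale on that "limited coverage area": without FSO relays, the constraint is pure spherical geometry between the satellite's altitude and the minimum elevation angle at which a station can track it. A rough sketch, using an assumed 550km orbital shell and a 25° minimum elevation angle (both illustrative figures often cited for Starlink, not taken from this thread):

```python
from math import acos, cos, radians

R_EARTH_KM = 6371.0

def coverage_radius_km(altitude_km=550.0, min_elevation_deg=25.0):
    # Ground radius within which a station can see a satellite overhead,
    # from spherical geometry: the Earth-central angle between the
    # satellite's nadir point and the station is
    #   acos((R / (R + h)) * cos(e)) - e
    # where h is altitude and e is the minimum elevation angle.
    e = radians(min_elevation_deg)
    central_angle = acos(R_EARTH_KM / (R_EARTH_KM + altitude_km) * cos(e)) - e
    return R_EARTH_KM * central_angle

print(round(coverage_radius_km()))  # roughly 940 km with these assumptions
```

With those numbers a satellite is visible out to roughly 940km of ground distance, so a subscriber and a ground station must both fall inside overlapping footprints of that size for traffic to flow; laser relays remove that constraint.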
Because Starlink is doing CGNAT, I'm pretty sure it's not part of the design, or likely even possible, for customer-to-customer traffic to use the FSO links directly. That traffic would have to hit a ground station and come back up, I believe.