General > General Technical Chat
Another deadly 737 Max control bug just found!
windsmurf:
American Airlines and United Airlines extend 737 Max grounding through at least early November: https://www.latimes.com/business/la-fi-boeing-max-american-united-grounding-20190714-story.html
floobydust:
That's 8,000 flights for United Airlines and $185M in revenue. I say the 737 Max won't be flying until well into 2020. I wonder if the money and lives lost will ever be a motivator to do it right, or will it just be investors' cash being burned for a little "hiccup". Instead of filling parking lots, maybe Boeing could turn the planes into condos? Just park under the wings.
aix:
Looks like there's a rebranding exercise going on: https://www.bbc.co.uk/news/business-48995509
Gyro:
Not sure if it's a knock-on effect, but Ryanair are now saying that they will have to cut flights and possibly close bases next year as a result of the delay: https://www.bbc.co.uk/news/business-49000796 It would be amazing if they were actually trying to get away with rebranding the Max. Ryanair have a pretty poor reputation.
splin:

--- Quote from: David Hess on July 10, 2019, 03:49:44 am ---And why machines for which the state cannot be documented due to things like heap allocation should not be used in safety critical applications. This also makes processor features which contain unknown state like caches, speculative execution, and multi-threading less desirable.
--- End quote ---

Except that pretty much any non-trivial system will use heap allocation - it will just be called something else, typically a buffer pool or the like. Try, for example, implementing a comms protocol without one. These pools will be safer than a global heap because (at least):

a) They will typically use fixed-size allocations (or a limited number of fixed sizes from different pools), so they should be free from fragmentation, which is one of the bigger problems with heaps.

b) They will also be shared between only a limited subset of the whole application.

Typically this managed memory will be passed between tasks/processes/subsystems - e.g. between different layers in a comms stack, including device drivers - with all the attendant risks due to the distance and time separating the allocator of the memory, the users, and the consumer that has to de-allocate it. By distance, I mean between developers, who may be in different teams, requiring coordination, documentation and suitable development tools to try to ensure that problems such as memory leaks, accessing freed memory, etc. are minimized (I'd say eliminated, if that were possible). You can improve things by copying buffers between tasks/subsystems if you can afford the performance and memory cost - but how many systems have that luxury?

If you have finite resources, someone has to decide how to allocate them. More stack or more buffer space? In the Toyota case, they didn't allocate enough stack space, but had they increased the stack allocation, something else would have had less memory available. That's OK if you can determine the worst-case memory usage in advance, but there are many times when static analysis tools can't be used and it comes down to the skill of the developers to work out the worst cases.

For safety-critical applications you obviously must be able to guarantee the behaviour of the critical parts of the system and isolate them from less trusted subsystems. The reality is that tradeoffs between cost, development time/effort (re-certifying a 737 MAX), functionality and safety are always being made - there are rarely, if ever, absolutes. No point in a seven-nines reliability requirement if it costs ten trillion dollars, or in a car ECU which costs $1500 and can't play MP3s. An aeroplane with enough redundancy to be guaranteed never to fail is probably too heavy to fly. Software that takes longer to develop than the life cycle of the product it's used on is pointless.
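[For readers unfamiliar with the fixed-size pools splin describes in point a), here is a minimal single-threaded sketch in C. All the names (pool_init, pool_alloc, pool_free) and sizes are illustrative, not taken from any real RTOS API; a real system would also need locking or per-task pools where buffers cross task boundaries, which relates to point b).]

/* Minimal fixed-size block pool: every allocation is the same size,
 * so the pool cannot fragment the way a general-purpose heap can,
 * and both alloc and free are O(1) and deterministic. */
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE  64   /* payload bytes per block (example value) */
#define BLOCK_COUNT 32   /* fixed at build time: worst case is known up front */

typedef union block {
    union block *next;            /* link field while the block is free */
    uint8_t payload[BLOCK_SIZE];  /* user data while the block is allocated */
} block_t;

typedef struct {
    block_t  storage[BLOCK_COUNT]; /* statically sized backing store, no heap */
    block_t *free_list;            /* singly linked list of free blocks */
} pool_t;

void pool_init(pool_t *p)
{
    /* Chain every block onto the free list. */
    for (size_t i = 0; i < BLOCK_COUNT - 1; i++)
        p->storage[i].next = &p->storage[i + 1];
    p->storage[BLOCK_COUNT - 1].next = NULL;
    p->free_list = p->storage;
}

void *pool_alloc(pool_t *p)
{
    block_t *b = p->free_list;
    if (b == NULL)
        return NULL;              /* pool exhausted: caller must handle it */
    p->free_list = b->next;
    return b->payload;            /* payload sits at offset 0 of the union */
}

void pool_free(pool_t *p, void *ptr)
{
    block_t *b = (block_t *)ptr;  /* valid: payload is at offset 0 */
    b->next = p->free_list;       /* O(1) release, no coalescing needed */
    p->free_list = b;
}

[Because the backing store is a statically sized array, the worst-case memory footprint is visible at build time - exactly the property the post argues a global heap gives up.]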