Author Topic: Machine Learning Algorithms  (Read 25592 times)


Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14640
  • Country: fr
Re: Machine Learning Algorithms
« Reply #75 on: November 18, 2021, 06:24:36 pm »
From your posting I get the idea that judicial sentencing and medical diagnostic work could also be done using fuzzy logic... if it were in style.

https://www.researchgate.net/publication/308823268_Medical_diagnosis_system_using_fuzzy_logic_toolbox
https://pubmed.ncbi.nlm.nih.gov/29852957/
 
The following users thanked this post: SuzyC

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #76 on: November 18, 2021, 06:41:31 pm »
What would be the minimal hardware required to obtain the same results?

What are the minimal components required to implement a NN or ML to solve the example problems?

Minimum hardware depends not on the algorithm, but on the amount of data that you need to push through it.  If you need an answer within a handful of milliseconds about whether a given 4K image contains a stop sign, then you need something pretty beefy.  (or do this: https://xkcd.com/1897/)  If you're okay waiting a few seconds to select from 10 available commands based on about 3 seconds of 8-bit audio at an 8kHz sample rate, then you could probably get away with an 8-bit microcontroller (with 3s * 8kHz * 1B/sample = 24kB of RAM, plus processing headroom, which eliminates most of them but not all), using only that one chip to capture the analog signal and then do all the processing on it.
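The buffer arithmetic above can be written out explicitly (a trivial sketch; the 3 s / 8 kHz / 8-bit figures are the ones from the example, and the function name is mine):

```python
def audio_buffer_bytes(seconds: float, sample_rate_hz: int, bytes_per_sample: int) -> int:
    """RAM needed just to hold the raw capture buffer, before any processing headroom."""
    return int(seconds * sample_rate_hz * bytes_per_sample)

# 3 s of 8-bit mono audio at 8 kHz, as in the voice-command example:
buf = audio_buffer_bytes(3, 8_000, 1)
print(buf)  # 24000 -> the 24kB of RAM quoted above
```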

Referring to the example you posted, what part of "real" fuzzy logic was neglected or not used in control implementations?

It was a multi-dimensional lookup table.  Nothing more.  Fuzzy Logic applies boolean logic to non-boolean inputs, using modified forms of the boolean operators AND, OR, NOT, etc., so that the human-designed rules still make sense in the boolean way of thinking.  For example: "IF [[tank_level IS midrange] AND [temperature IS warm]] THEN stir".  But that's not what this was.  The thought process for the version that I saw was entirely analog, except for the sparse sampling, and potentially a threshold comparison at the end to control an on/off device; it only happened to use a digital system to process it.
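For the curious, the classic (Zadeh) fuzzy operators and a rule like the one above can be sketched in a few lines of Python. The triangular membership functions and all thresholds below are my own illustration, not from the controller being discussed:

```python
def triangle(x, lo, peak, hi):
    """Triangular membership function: 0 outside [lo, hi], 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

# Zadeh operators: fuzzy AND = min, fuzzy OR = max, fuzzy NOT = 1 - x
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

# Rule: IF [tank_level IS midrange] AND [temperature IS warm] THEN stir
tank_level, temperature = 55.0, 42.0          # hypothetical sensor readings
midrange = triangle(tank_level, 20, 50, 80)   # degree to which level is "midrange"
warm     = triangle(temperature, 30, 45, 60)  # degree to which temp is "warm"
stir_strength = f_and(midrange, warm)         # firing strength of the rule
print(stir_strength)  # 0.8
```

The output is a *degree* of "stir", not a yes/no; a defuzzification step (or the final threshold comparison mentioned above) would turn it into an actuator command.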
« Last Edit: November 18, 2021, 06:47:13 pm by AaronD »
 
The following users thanked this post: SuzyC

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4087
  • Country: nz
Re: Machine Learning Algorithms
« Reply #77 on: November 18, 2021, 10:46:11 pm »
Also lack of context understanding.

I like to refer to an example where a supposedly state-of-the-art image recognition "AI" recognizes objects on a street, draws the bounding boxes in real time and puts the labels next to them. The typical demo.

But then, the rectangular thing attached to the outside of a house at the end of the driveway, which we humans call a "garage door", is misidentified as "flat screen TV" - and indeed, if you just cropped that part of the image, a human could make the same mistake; it's just a rectangle with little or no detail. What makes it a garage door is the context around it. You don't buy a 300-inch flat screen TV, and you don't mount it outside your house, at ground level, at the end of your driveway. This is all obvious to a human.

Context is tricky.

https://twitter.com/FSD_in_6m/status/1400207129479352323

 
The following users thanked this post: Siwastaja

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8238
  • Country: fi
Re: Machine Learning Algorithms
« Reply #78 on: November 19, 2021, 08:52:01 am »
And THAT one is an example of a situation where you can't get the required 10000 pieces of training data to find the right correlations. It's a less-than-once-in-a-lifetime occurrence. Even if you "learn in the cloud", i.e., combine all the data worldwide to be able to include those few dozen cases where a truck full of traffic lights drives in front of you, you are not going to get the NN to learn the correct way of reacting to this situation.

For a human, this is obvious, because human learning is not based on computing simple correlation coefficients. It has to work another way, because a single human can't access vast amounts of data for learning. A human kid sees a picture of a badly drawn cat in a book and then recognizes the cat, drawn or real, in very different scenarios. A NN requires a 1000-page book full of pictures: "this is a cat. This is also a cat. This isn't a cat. This isn't either, but this is."

The learning mechanisms are clearly pretty bad. A huge amount of data can be used to compensate, but that only works when a huge amount of data is available. This misses all the corner cases, by the very definition of a corner case!

This is also why NNs are great for classification tasks where making mistakes in corner cases doesn't matter - for example, the waste recycling detection mentioned earlier, because waste stream processes are robust against small amounts of the wrong types.
« Last Edit: November 19, 2021, 08:54:07 am by Siwastaja »
 

Online tszaboo

  • Super Contributor
  • ***
  • Posts: 7478
  • Country: nl
  • Current job: ATEX product design
Re: Machine Learning Algorithms
« Reply #79 on: November 19, 2021, 10:28:15 am »
To answer the original question: PyTorch, SciPy, OpenCV. PyTorch for custom ML stuff; anything written more than 1-2 years ago is obsolete. The entire ML workflow became so much easier with it, and the coding part is very straightforward. I had an ML algorithm working and trained in about a day last time I tried, while with Keras, Tensorflow and others it was a major PITA to get going.
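For readers new to this, the loop that frameworks like PyTorch reduce to a few lines looks roughly like the following. This is a framework-agnostic sketch in plain NumPy (the data, model, and hyperparameters are all made up for illustration), not the PyTorch API itself:

```python
import numpy as np

# Toy logistic regression trained by gradient descent -- the boilerplate
# that PyTorch's autograd, optimizers, and nn modules largely automate.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # linearly separable toy labels

w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(500):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))             # sigmoid "forward pass"
    grad_w = X.T @ (p - y) / len(y)          # gradient of the log loss
    grad_b = float(np.mean(p - y))
    w -= lr * grad_w                          # "optimizer step"
    b -= lr * grad_b

acc = np.mean((p > 0.5) == (y > 0.5))
print(acc)                                    # training accuracy, close to 1.0 here
```

In PyTorch the gradient computation is done for you by `loss.backward()`, which is a big part of why custom models become quick to iterate on.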
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3158
  • Country: ca
Re: Machine Learning Algorithms
« Reply #80 on: November 19, 2021, 04:15:32 pm »
A human kid sees a picture of a badly drawn cat in a book and then recognizes the cat, drawn or real, in very different scenarios.

Or a kid draws a boa and everyone else recognizes it as a hat.
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #81 on: November 19, 2021, 04:33:28 pm »
And THAT one is an example of a situation where you can't get the required 10000 pieces of training data to find the right correlations. It's a less-than-once-in-a-lifetime occurrence. Even if you "learn in the cloud", i.e., combine all the data worldwide to be able to include those few dozen cases where a truck full of traffic lights drives in front of you, you are not going to get the NN to learn the correct way of reacting to this situation.

For a human, this is obvious, because human learning is not based on computing simple correlation coefficients. It has to work another way, because a single human can't access vast amounts of data for learning. A human kid sees a picture of a badly drawn cat in a book and then recognizes the cat, drawn or real, in very different scenarios. A NN requires a 1000-page book full of pictures: "this is a cat. This is also a cat. This isn't a cat. This isn't either, but this is."

The learning mechanisms are clearly pretty bad. A huge amount of data can be used to compensate, but that only works when a huge amount of data is available. This misses all the corner cases, by the very definition of a corner case!

This is also why NNs are great for classification tasks where making mistakes in corner cases doesn't matter - for example, the waste recycling detection mentioned earlier, because waste stream processes are robust against small amounts of the wrong types.

I think the talk about fads in ML is more significant than meets the eye.  The entire field is so new, promising, and exciting that we can't slow down and do it right.  Old techniques that may have failed because of the processing power available at the time are not revisited, nor do we spend the time to actually train things well.  (remember how long humans take, starting from blank at birth...)  The amazingly long training times and the amount of data required clash with our excitement, and so we move on to something else.

It simply takes time, with lots of relevant experiences, to build what we call "common sense".  Humans that don't have those experiences don't have the sense either; and a correctly-built learning machine that does have them, can.  But like I said in the previous paragraph, there's no substitute for the long way around, and we're not going to take the long way around as long as the funding is based on excitement.



In the case of traffic lights on a truck, you might think of adding a bunch of rules like, "Traffic lights are only valid when at least one of them is lit up, and when they're not moving a significant distance, but swinging in the breeze while on is still valid, etc.", but you very quickly get into an unworkable mountain of arbitrary rules that is practically impossible for even humans to learn.  So why do we expect a machine to do it?
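To make the point concrete, here's a toy sketch of what such a hand-written rule list might look like. Every field, threshold, and rule below is hypothetical; the point is that each newly discovered corner case adds yet another predicate:

```python
# Each rule is a predicate over a description of a detected traffic light.
RULES = [
    lambda light: light["lit"],               # at least one lamp is on
    lambda light: light["speed_mps"] < 0.5,   # not moving a significant distance
                                              # (swinging in the breeze still passes)
    lambda light: not light["on_vehicle"],    # not strapped to a truck bed
    # ... and so on, with no obvious end to the list
]

def is_valid_traffic_light(light: dict) -> bool:
    """A light only counts if it passes every hand-written rule."""
    return all(rule(light) for rule in RULES)

truck_light    = {"lit": True, "speed_mps": 0.0, "on_vehicle": True}
roadside_light = {"lit": True, "speed_mps": 0.0, "on_vehicle": False}
print(is_valid_traffic_light(truck_light), is_valid_traffic_light(roadside_light))  # False True
```

The code is trivial; the impossible part is enumerating and maintaining the rules themselves, which is exactly the "unworkable mountain" being described.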

(I'm related to someone that does that.  He says he doesn't understand people, so he follows rules instead, built and refined over 50+ years of essentially trial-and-error.  He still makes frequent similar mistakes like a machine does, complete with high confidence in a terrible answer.  He can do things on his own just fine, even got a Ph.D. in a highly technical field and made his career there, but personal interactions are still painful, and he doesn't seem to have a true "engineering mind" that works to at least some degree across all disciplines.  It's very much based on direct experience alone.  So if even a human can't make it all work with direct rules alone, there must be something else that a machine must also include in order to get it right.)

I think the case of traffic lights on a truck, and many others, would greatly benefit from a "sense of intent", or, "What is the intended purpose of this scenario?"  Is this traffic light intended to control traffic?  Or is it simply being transported?  How would it actually work to have a valid traffic light on a moving truck, and do the required rules for that make sense?  (automatic proof by contradiction)  Lots of common sense is involved in that, which comes with the requirements above, but common sense is not the whole story either.

Build a machine that has a section for others' intent, include that as part of the learning, and use it to influence decisions; and see what it comes up with...
 
The following users thanked this post: Marco, SiliconWizard

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3158
  • Country: ca
Re: Machine Learning Algorithms
« Reply #82 on: November 19, 2021, 08:11:17 pm »
In the case of traffic lights on a truck, you might think of adding a bunch of rules like ...

I don't think imposing rules on a machine gives it any intelligence. Rather you use your own intelligence to design the set of rules which are then followed by the machine.

For example, I have a chip programming machine. It has a camera and needs to detect if the chip is present, and if the chip is there it needs to detect its position. I looked at various pictures taken with the camera. Then I designed a small set of rules. Then I wrote the program to calculate the rules. The program does this very quickly and never makes any mistakes. It is silly to believe that it has any intelligence. The intelligence is all mine.
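A hypothetical sketch of that kind of fixed rule set (the thresholds, the pixel-count rule, and the centroid rule here are my own illustration, not NorthGuy's actual program):

```python
import numpy as np

def find_chip(image, dark_threshold=80, min_pixels=50):
    """Rule-based detection: is a chip present, and if so, where?"""
    mask = image < dark_threshold     # rule 1: chip pixels are darker than background
    if mask.sum() < min_pixels:       # rule 2: too few dark pixels -> no chip
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()       # rule 3: position = centroid of dark pixels

# Synthetic test image: bright background, dark 20x20 "chip" at cols 40-59, rows 30-49
img = np.full((100, 100), 200, dtype=np.uint8)
img[30:50, 40:60] = 10
x, y = find_chip(img)
print(x, y)  # 49.5 39.5
```

All the intelligence is in choosing the threshold and the rules by looking at real pictures; the program itself just evaluates them, quickly and deterministically.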
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #83 on: November 19, 2021, 08:41:49 pm »
In the case of traffic lights on a truck, you might think of adding a bunch of rules like ...

I don't think imposing rules on a machine gives it any intelligence. Rather you use your own intelligence to design the set of rules which are then followed by the machine.

For example, I have a chip programming machine. It has a camera and needs to detect if the chip is present, and if the chip is there it needs to detect its position. I looked at various pictures taken with the camera. Then I designed a small set of rules. Then I wrote the program to calculate the rules. The program does this very quickly and never makes any mistakes. It is silly to believe that it has any intelligence. The intelligence is all mine.

What IS "intelligence" anyway?  That's a surprisingly hard question to answer, without introducing a bunch of unnecessary restrictions by definition.

And the part that you quoted was a strawman argument, used to make the point that followed it.  :)
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Re: Machine Learning Algorithms
« Reply #84 on: November 19, 2021, 09:08:31 pm »
Governments want this so badly, (it has the potential to save business so much money) that they will give companies immunity from liability when they screw up, just to give them an edge in using it sooner. After all, they are where the money comes from, right?

And one problem, as I said and tggzzz pointed out, is that it's impossible to get a formal proof that a given trained NN will behave in the way we expect it to. We can only test, test, test until we get a statistically significant result that meets our requirements, and it's never 100%. Thing is, what happens for the few % of cases in which it fails is unknown (and can be a big risk in any critical application), and *why* it performs as expected is actually also unknown.

Our inability to prove correctness of trained NNs is a major issue, that bites, and will bite us for years to come. Worse yet, analyzing why a trained NN fails for some inputs is also almost impossible. Thus using them in any safety-critical application is a serious problem.

It is even worse than that :( You have no idea how close you are to the boundary where they stop working. There are many examples of trivial (even invisible) changes to pictures causing the classifier to completely misclassify the image.

Yes, exactly. Consider that companies also love to have an escape from blame that basically is always available to them, which is what is coming.
"What the large print giveth, the small print taketh away."
 

Online NorthGuy

  • Super Contributor
  • ***
  • Posts: 3158
  • Country: ca
Re: Machine Learning Algorithms
« Reply #85 on: November 20, 2021, 12:48:55 am »
What IS "intelligence" anyway?

"the ability to acquire and apply knowledge and skills" the dictionary says.
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8238
  • Country: fi
Re: Machine Learning Algorithms
« Reply #86 on: November 20, 2021, 04:59:42 pm »
But if the machine replicates NorthGuy's intelligence, then you could say the machine itself is intelligent?

And, if NorthGuy did the job well, then what's the problem?

In this regard, I believe well-programmed fixed algorithms, or "expert systems", are a much better idea than forcing general-purpose NNs everywhere and hoping that throwing petabytes of data at them somehow automagically solves all the problems.

Also, I believe in giving the machine super-human capabilities when you can, instead of trying to replicate a human, weaknesses included. For example, regarding self-driving automobiles, you can measure distance using laser beams, an obvious advantage over human vision. Yet Tesla says they don't want that when they can route standard human-like camera vision into a neural network with the complexity of an ant's brain and hope it makes some sense.
« Last Edit: November 20, 2021, 05:02:52 pm by Siwastaja »
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14640
  • Country: fr
Re: Machine Learning Algorithms
« Reply #87 on: November 20, 2021, 05:45:32 pm »
Yes, exactly. Consider that companies also love to have an escape from blame that basically is always available to them, which is what is coming.

Yep, this is one point I also mentioned and that I think is key here. This artificial "intelligence" has the extraordinary "power" of helping companies (and ultimately, governments) get rid of any kind of liability. How could they not force the movement at all costs?

The autonomous car example is telling. In case the autonomous system fails and causes an accident, the driver will be liable! Because the machine itself can't, of course, be liable for anything, and since its behavior isn't provable, the company selling it can't be either (which is utterly twisted, of course.) The really fun part is that proponents of this will claim how much more reliable AI is compared to humans on the road, yet if anything goes wrong, the driver is supposed to be the one supervising the machine at all times and will be liable. And of course all this is perfectly consistent. :-DD
 

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 9090
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: Machine Learning Algorithms
« Reply #88 on: November 21, 2021, 03:40:46 pm »
The autonomous car example is telling. In case the autonomous system fails and causes an accident, the driver will be liable! Because the machine itself can't, of course, be liable for anything, and since its behavior isn't provable, the company selling it can't be either (which is utterly twisted, of course.) The really fun part is that proponents of this will claim how much more reliable AI is compared to humans on the road, yet if anything goes wrong, the driver is supposed to be the one supervising the machine at all times and will be liable. And of course all this is perfectly consistent. :-DD
Isn't it that if an aircraft crashes because the autopilot malfunctioned, the pilots are at fault for not noticing and taking action? (In one case, the autopilot disengaged for some reason but the warning buzzer wasn't loud enough to stand out from the background noise, while the pilots were troubleshooting some other problem with the aircraft.)
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19787
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #89 on: November 21, 2021, 04:36:40 pm »
The autonomous car example is telling. In case the autonomous system fails and causes an accident, the driver will be liable! Because the machine itself can't, of course, be liable for anything, and since its behavior isn't provable, the company selling it can't be either (which is utterly twisted, of course.) The really fun part is that proponents of this will claim how much more reliable AI is compared to humans on the road, yet if anything goes wrong, the driver is supposed to be the one supervising the machine at all times and will be liable. And of course all this is perfectly consistent. :-DD
Isn't it that if an aircraft crashes because the autopilot malfunctioned, the pilots are at fault for not noticing and taking action? (In one case, the autopilot disengaged for some reason but the warning buzzer wasn't loud enough to stand out from the background noise, while the pilots were troubleshooting some other problem with the aircraft.)

In theory the pilots always have authority and responsibility in law.

In practice it is common for the entire flight deck crew to be asleep on long haul flights. They might not notice the autopilot has made a "poor" decision.
In practice sometimes the autopilot overrides the pilots: witness the 737 MAX accidents and AF447, where neither pilot realised P2's control inputs were being ignored.

Now those are highly trained people operating in a relatively well understood and constrained environment. Many of the new ML systems will have untrained operators in complex environments. What could possibly go wrong?
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14640
  • Country: fr
Re: Machine Learning Algorithms
« Reply #90 on: November 21, 2021, 06:18:53 pm »
The autonomous car example is telling. In case the autonomous system fails and causes an accident, the driver will be liable! Because the machine itself can't, of course, be liable for anything, and since its behavior isn't provable, the company selling it can't be either (which is utterly twisted, of course.) The really fun part is that proponents of this will claim how much more reliable AI is compared to humans on the road, yet if anything goes wrong, the driver is supposed to be the one supervising the machine at all times and will be liable. And of course all this is perfectly consistent. :-DD
Isn't it that if an aircraft crashes because the autopilot malfunctioned, the pilots are at fault for not noticing and taking action? (In one case, the autopilot disengaged for some reason but the warning buzzer wasn't loud enough to stand out from the background noise, while the pilots were troubleshooting some other problem with the aircraft.)

Looks like you missed the point - at least you haven't given this a lot of thought.

A few things:
- I pointed out the patent inconsistency of CLAIMING that AI systems are much safer than any human could be, while ultimately expecting the human to make up for any mishap of the automated system. That is just twisted.
- I would have a lot fewer concerns overall if companies promoting and selling stuff with AI systems were ENTIRELY liable in case of a mishap. That'd be a game changer for sure.
- Pilots in aircraft are not a very good parallel - ultimately, the "pilot in command" is responsible for anything that happens in the aircraft, not just any pilot (copilots are not responsible in the same way). This has strict legal implications and is quite different from the case of an individual driver in a car.
- Conventional autopilots are predictable (at least for the most part ;D ). Sometimes things can go wrong, due for instance to a sensor failure not well handled in software, but most often, when a sensor fails, the autopilot will disengage itself first thing. The exceptions mentioned by tggzzz are actually not "autopilot" failures per se, but failures of extra flight systems that are supposed to keep the plane safe. Not that it fundamentally makes a big difference, just that those systems are "sneakier" than autopilots, which can be disabled at the press of a button. A parallel in a car would be, for instance, an ABS failure rather than a failure of those AI-based "autopilots".
- Even so, there are already cases with existing systems which are not AI-based (like the MCAS debacle). But as a few of us are trying to explain in this thread, the difference is that it was in the end relatively straightforward to understand where the problem came from, what happened, and how to fix it, because the systems in question were analyzable. And Boeing faced the consequences. Imagine the same issue with Boeing's MCAS, but this time with an MCAS entirely AI-based, and no one able to pinpoint the issue for sure after the accidents.

"Interestingly", Elon Musk is perfectly aware of those issues with AI and has been saying things about it that are quite similar to what tggzzz, I, and a few others are saying here. His main point for actively *using* AI in his products is to become proactive rather than being passive and letting others do it anyway. He's been a proponent of *regulating* AI in a strict way. Problem though, nothing much is really happening yet in that area, and he's still actively promoting AI, while - at least as far as I know - having not done much for the regulation part (like actively working with politics) apart from a few talks. I get his point of being proactive rather than letting others do it anyway, but as it is, whatever his concerns are, it's not helping much and doesn't look liike much more than just cute marketing talk to make him look like the "good guy".
« Last Edit: November 21, 2021, 06:20:40 pm by SiliconWizard »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4087
  • Country: nz
Re: Machine Learning Algorithms
« Reply #91 on: November 21, 2021, 11:52:32 pm »
In practice it is common for the entire flight deck crew to be asleep on long haul flights.

That *has* happened but it should never happen. It's certainly not COMMON.

Flights long enough for this to be any sort of problem have multiple crews onboard and they go to actual bunks to sleep.
 
The following users thanked this post: NiHaoMike

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14640
  • Country: fr
Re: Machine Learning Algorithms
« Reply #92 on: November 21, 2021, 11:58:35 pm »
In practice it is common for the entire flight deck crew to be asleep on long haul flights.

That *has* happened but it should never happen. It's certainly not COMMON.

Flights long enough for this to be any sort of problem have multiple crews onboard and they go to actual bunks to sleep.

Absolutely.
In the AF447 case, the PIC was asleep when things started to get problematic, but there was a copilot in his seat. Funnily enough, in this particular case, had the copilot been asleep instead, the crash would probably never have happened. But that's just one particular case!
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8238
  • Country: fi
Re: Machine Learning Algorithms
« Reply #93 on: November 22, 2021, 06:34:14 pm »
Absolutely.
In the AF447 case, the PIC was asleep when things started to get problematic, but there was a copilot in his seat. Funnily enough, in this particular case, had the copilot been asleep instead, the crash would probably never have happened. But that's just one particular case!

IMHO, #1 root cause in that one as well is still lack of basic skills and training of those basic skills. It's again the classic "oh, we are falling from the sky, I have no idea what to do, maybe pull the nose up so we go higher?!?" Yes, everything else, like the sensor failures and fatigue, were contributing factors that added to the confusion, but a solid understanding of the basics is the key here. It's like forgetting Ohm's law: trying for a minute to figure out whether an increase in resistor value increases or decreases the current, and simply not managing it - but instead of such basics, they know how to configure a project in CubeMX, the equivalent of dealing with all those flight deck computers to get the plane airborne and to the destination, without an idea of what's actually happening.

Solution? Similar to doing enough bare-metal PIC/AVR projects in embedded, learning to fly a small aircraft with no autopilot whatsoever would be a good starting point to grasp basics such as what a stall means and how to recover from one. Seemingly this isn't obvious at all to commercial airline pilots. It should be.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14640
  • Country: fr
Re: Machine Learning Algorithms
« Reply #94 on: November 22, 2021, 07:06:07 pm »
Yes, and one "output" of this accident is that they drastically improved the training regarding handling stalls on airliners.

Thing is, related to this whole discussion: the more AI we use, the less trained people will be. Training has a cost. Ultimately, one of the whole points of automation is to lower COSTS. The part of automation used solely to improve safety has pretty much already been there for a while. The next step is not to provide tools to help people and get better safety: it's to get rid of people altogether. They are absolutely all claiming - Musk included, even though he pretends to be wary of AI - that the future of transportation is fully autonomous. Nothing else.
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #95 on: November 22, 2021, 07:26:20 pm »
Yes, and one "output" of this accident is that they drastically improved the training regarding handling stalls on airliners.

Thing is, related to this whole discussion: the more AI we use, the less trained people will be. Training has a cost. Ultimately, one of the whole points of automation is to lower COSTS. The part of automation used solely to improve safety has pretty much already been there for a while. The next step is not to provide tools to help people and get better safety: it's to get rid of people altogether. They are absolutely all claiming - Musk included, even though he pretends to be wary of AI - that the future of transportation is fully autonomous. Nothing else.

Exactly.  We've improved human safety to the point of reaching its limit.  Any improvement now is to remove humans from the equation.  Once THAT happens, THEN we can realize the exciting benefits.

Someone in this thread mentioned "constrained environments".  Removing humans will go a long way in enforcing that constraint, thus making automation much safer.  If we're just passengers, not controllers, then a car doesn't have to worry about the idiot that's about to T-bone it.

I've seen a comment elsewhere that I agree with as well, that says that we won't have flying cars until we first have fully autonomous cars.  If the general public is inherently this bad at driving in 2D, then we certainly can't allow them to drive in 3D!
 

Online Siwastaja

  • Super Contributor
  • ***
  • Posts: 8238
  • Country: fi
Re: Machine Learning Algorithms
« Reply #96 on: November 22, 2021, 07:47:05 pm »
There is something to be gained if automation is to be applied carefully:

Right now a lot of piloting tasks consist of managing stone-age automation. Given a fixed amount of money and time, this already limits the training of important basics, and it has been this way for three decades already.

By using more advanced, modern automation, stupid "program the computer" tasks can be reduced to almost zero, freeing resources for training in the basics and, during flights, freeing attention from the "stupid computer" to what's actually important: airspeed, altitude and the artificial horizon. Many accidents have been caused by a lack of focus on these due to fighting with the automation; sometimes the "fight" just means standard procedures that amount to a battle.

But such good automation does not need neural networks or similar "AI" things. It requires a classic understanding of simple computer algorithms, simulation and testbenching, and UI/UX specialists.
 
The following users thanked this post: AaronD

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19787
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #97 on: November 22, 2021, 08:06:55 pm »
We've improved human safety to the point of reaching its limit.  Any improvement now is to remove humans from the equation. 

Possibly. Rubbish - unless you can prove that contention - and we need stronger proof than is normal in the ML fraternity.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14640
  • Country: fr
Re: Machine Learning Algorithms
« Reply #98 on: November 22, 2021, 08:21:39 pm »
We've improved human safety to the point of reaching its limit.  Any improvement now is to remove humans from the equation. 

Possibly. Rubbish - unless you can prove that contention - and we need stronger proof than is normal in the ML fraternity.

Not just that, but it's interesting to see AI promoted with claims of improving "human safety", when I'd claim the main reason by far is to lower costs.
Generally speaking, one must understand what "getting humans out of the equation" implies.
We can indeed get out of the equation for good, and there'll be nothing much to talk about anymore.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4087
  • Country: nz
Re: Machine Learning Algorithms
« Reply #99 on: November 22, 2021, 08:44:45 pm »
Absolutely.
In the AF447 case, the PIC was asleep when things started to get problematic, but there was a copilot in his seat. Funnily enough, in this particular case, had the copilot been asleep instead, the crash would probably never have happened. But that's just one particular case!

IMHO, #1 root cause in that one as well is still lack of basic skills and training of those basic skills. It's again the classic "oh, we are falling from the sky, I have no idea what to do, maybe pull the nose up so we go higher?!?"

As a pilot myself I agree completely. I find it absolutely incredible that an international airline pilot can be so lacking in basic flying skills. Have they turned completely into button pushers?

Quote
Solution? Similar to doing enough bare-metal PIC/AVR projects in embedded, learning to fly a small aircraft with no autopilot whatsoever would be a good starting point to grasp basics such as what a stall means and how to recover from one. Seemingly this isn't obvious at all to commercial airline pilots. It should be.

I think training in a standard small aircraft may be inadequate. In general they do very little stall training and these days often no spin training at all. When they actually do stall training, they are trained to initiate recovery in response to the stall warning horn sounding -- which generally happens around 5 knots faster than the actual stall, when the aircraft is actually still flying normally.

Training in gliders (sailplanes in the US) is much more stall intensive. There is no warning horn -- you have to learn to recognise the aerodynamic symptoms of a stall yourself. And while Cessna pilots spend almost all their time boring holes in the sky at twice or more the stall speed, glider pilots spend a lot of time flying maximum-performance circles in thermals at just above the stall speed (or the accelerated stall speed, due to the steep bank angle and higher G load often used). As thermals are gusty, you often get actual stalls, and it becomes absolutely ingrained what that feels like; you automatically make the required recovery action (easing the stick forward until the stall stops).

Here's my most popular gliding video, with currently 84960 views:


 

