Author Topic: Machine Learning Algorithms  (Read 25165 times)


Offline BlogRahul (Topic starter)

  • Regular Contributor
  • *
  • Posts: 75
  • Country: in
Machine Learning Algorithms
« on: November 01, 2021, 06:49:24 am »
I don't understand how we choose algorithms in machine learning. For example, I need to make a model that identifies which flower is on a plant. A Google search shows that we will need a CNN, but I don't understand why only the CNN is useful for this project.
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11260
  • Country: us
    • Personal site
Re: Machine Learning Algorithms
« Reply #1 on: November 01, 2021, 07:12:02 am »
There is no strict logical explanation. People tried a bunch of stuff, and the current iteration of the best-suited algorithms for image recognition is the CNN. That is, until something else is invented.

Why CNNs are specifically useful for image recognition and classification is pretty easy to see - they iteratively reduce the amount of information down to a small set of numbers.
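
To make that concrete, here is a minimal sketch of that funnel in Keras (the layer sizes and the five flower classes are illustrative assumptions, not a recommendation):

# Minimal CNN sketch: each convolution/pooling stage shrinks the image
# while extracting features, ending in a handful of class scores.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),                        # RGB photo in
    layers.Conv2D(16, 3, padding="same", activation="relu"),  # edge/texture features
    layers.MaxPooling2D(),                                    # 128x128 -> 64x64
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),                                    # 64x64 -> 32x32
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.GlobalAveragePooling2D(),                          # whole image -> 64 numbers
    layers.Dense(5, activation="softmax"),                    # -> 5 flower-class scores
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])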

Alex
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #2 on: November 01, 2021, 08:53:00 am »
You don't. The "AI" does some magic, you cross your fingers and hope.

"7 Revealing Ways AIs Fail. Neural networks can be disastrously brittle, forgetful, and surprisingly bad at math "
https://spectrum.ieee.org/ai-failures
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6202
  • Country: ro
Re: Machine Learning Algorithms
« Reply #3 on: November 01, 2021, 10:55:22 am »
Different types of NNs fit different tasks.  See if these short videos help:
https://www.youtube.com/c/DeepLearningTV/videos
(The title makes it look like it's about deep learning only, but I remember that at some point the series has an overview of which type is good where. I hope I'm not confusing it with some other channel, but even so it's very short and informative as an overview.)

Another channel I remember is Brandon Rohrer's classes/playlists, e.g.:


Also Barry Van Veen has some classes/playlists about AI/ML:
https://www.youtube.com/user/allsignalprocessing/playlists

If these don't address your exact question, you'll have to browse through the playlists and video pages of their YouTube channels.  See for yourself whether any of these suits you.
« Last Edit: November 01, 2021, 11:01:30 am by RoGeorge »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #4 on: November 01, 2021, 06:01:07 pm »
ataradov and tggzzz hit the nail on the head.

Current AI with neural networks is essentially based on experiments. The theory behind NNs is relatively simple - at the computation level - as to their ability to "learn", but that's about it. It's almost impossible to predict - and even less so to *prove* - how they will perform for a given application.

So people just experiment with some refinements, mainly on the backpropagation steps and learning datasets, AFAIK, compare results and eventually pick the approach which gives the best results overall. To answer your question, people test, test and test more with lots of experiments and data sets until they get good results, then the approach is selected and becomes the de facto one for a given application until the next, better one comes up.

One of the reasons NNs have become useful (while they are certainly nothing new) is that we now have enormous memory and computing power at our disposal for cheap.

And one problem as I said and tggzzz pointed out is that it's impossible to get a formal proof that a given trained NN will behave in the way we expect it to. We can only test, test, test until we get a statistically significant result that meets our requirements, and it's never 100%. Thing is, what happens in the few % of cases in which it fails is unknown (and can be a big risk in any critical application), and *why* it performs as expected is actually also unknown.

Our inability to prove correctness of trained NNs is a major issue, that bites, and will bite us for years to come. Worse yet, analyzing why a trained NN fails for some inputs is also almost impossible. Thus using them in any safety-critical application is a serious problem.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Machine Learning Algorithms
« Reply #5 on: November 01, 2021, 07:12:35 pm »
Google for 'numberphile cnn' (no quotes) and you will turn up a couple of pretty good videos.

Image classification using Python:
https://www.analyticsvidhya.com/blog/2021/06/image-classification-using-convolutional-neural-network-with-python/

MATLAB with the Deep Learning Toolbox does a really nice job.  It will also use CUDA for the math if you have an NVIDIA graphics card and the Parallel Computing Toolbox.  Always another toolbox...

There are a lot of books on the topic.  I have a bunch of them but it's still slow going.
 
The following users thanked this post: cdev

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #6 on: November 01, 2021, 08:56:33 pm »
And one problem as I said and tggzzz pointed out is that it's impossible to get a formal proof that a given trained NN will behave in the way we expect it to. We can only test, test, test until we get a statistically significant result that meets our requirements, and it's never 100%. Thing is, what happens in the few % of cases in which it fails is unknown (and can be a big risk in any critical application), and *why* it performs as expected is actually also unknown.

Our inability to prove correctness of trained NNs is a major issue, that bites, and will bite us for years to come. Worse yet, analyzing why a trained NN fails for some inputs is also almost impossible. Thus using them in any safety-critical application is a serious problem.

It is even worse than that :( You have no idea of how close you are to the boundary where they stop working. There are many examples of making trivial (even invisible) changes to pictures, and the classifier completely misclassifies the image.
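
For reference, the classic demonstration of this is the "fast gradient sign" attack: nudge every pixel a tiny amount in whichever direction increases the loss. A rough sketch, assuming a trained Keras classifier named model and pixel values in [0, 1]:

import tensorflow as tf

def fgsm_perturb(model, image, true_label, eps=0.01):
    # eps is small enough that the change is invisible to a human,
    # yet the perturbed image is often confidently misclassified.
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(
            [true_label], model(x))
    grad = tape.gradient(loss, x)                  # direction that hurts most
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)[0]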
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6202
  • Country: ro
Re: Machine Learning Algorithms
« Reply #7 on: November 01, 2021, 11:23:35 pm »
The same things can be said about humans.  But this wouldn't answer the question from the OP.  There are places where ML fits and other places where it doesn't.

My advice is to follow the classes then go and experiment, and it will all start to make sense.

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #8 on: November 02, 2021, 09:03:37 am »
The same things can be said about humans.  But this wouldn't answer the question from the OP.  There are places where ML fits and other places where it doesn't.

Key differences: a human can
  • explain why they made a decision. That's an unsolved problem for current ML programs
  • be made to do things differently next time. You can't retrain an ML program that easily, since you don't know
    • why it made a decision
    • what extra examples are necessary
    • how those extra examples would have changed previous and future decisions
    so ML is being, and will continue to be, used inappropriately as "the computer says so" magic

Those problems are critical, and people's lives have been and are being destroyed by them. There are many examples reported in comp.risks. I'll just note that
  • some US states are using ML to decide whether unconvicted crime suspects should be held in jail, and to set the sentences of convicts
  • some medical diagnostic systems have been found making decisions on the basis of the font used on X-rays
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: SiliconWizard

Offline ralphrmartin

  • Frequent Contributor
  • **
  • Posts: 480
  • Country: gb
    • Me
Re: Machine Learning Algorithms
« Reply #9 on: November 02, 2021, 04:07:27 pm »
I strongly recommend
Deep Learning with Python, Second Edition

https://www.manning.com/books/deep-learning-with-python-second-edition

which not only shows how to apply deep learning using TensorFlow, but also explains the ideas behind it. Your question and many others will be answered by this book.
 

Offline diyaudio

  • Frequent Contributor
  • **
  • !
  • Posts: 683
  • Country: za
Re: Machine Learning Algorithms
« Reply #10 on: November 13, 2021, 01:17:23 pm »
I don't understand how we choose algorithms in machine learning. For example, I need to make a model that identifies which flower is on a plant. A Google search shows that we will need a CNN, but I don't understand why only the CNN is useful for this project.

Interesting question; these are two radically different domains.

- Machine Learning (Algorithm) vs CNN (Deep Learning Network)
- Image detection using "Computer Vision algorithm" vs Image detection using "Vision Deep Learning Network"

Machine Learning - high-level algorithms for statistics-related predictions.

Examples (two of these are sketched in code after the list):

- Density-based techniques (k-nearest neighbor, local outlier factor, isolation forests).
- Subspace, correlation-based and tensor-based outlier detection for high-dimensional data.
- One-class support vector machines.
- Replicator neural networks, autoencoders, variational autoencoders, long short-term memory neural networks.
- Bayesian networks.
- Hidden Markov models.
- Minimum Covariance Determinant.
- Cluster analysis-based outlier detection.
- Deviations from association rules and frequent itemsets.
- Fuzzy logic-based outlier detection.
- Ensemble techniques, using feature bagging.
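
As promised above, a hedged sketch of two entries from this list (isolation forests and one-class SVMs), using scikit-learn on made-up 2-D data:

# Sketch: flag outliers in a toy dataset two different ways.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(200, 2))   # tight inlier cluster
weird = rng.uniform(-6, 6, size=(10, 2))   # scattered outliers
data = np.vstack([normal, weird])

iso = IsolationForest(random_state=0).fit(data)
svm = OneClassSVM(nu=0.05).fit(data)

# predict() returns -1 for points each model considers anomalous
print((iso.predict(data) == -1).sum(), "outliers by isolation forest")
print((svm.predict(data) == -1).sum(), "outliers by one-class SVM")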

DNN (Deep Neural Network)

Used to find "complex relationships" in "complex domains", including (but not limited to) image/video/audio/speech, etc. DNNs have a high degree of accuracy; they do, however, require large datasets to build models that perform a prediction. Note: we are no longer dealing with "algorithms" here but with "network configurations". An algorithm has a predictable outcome for a given input; a DNN doesn't, and there is no way to reverse engineer how the network came to a conclusion, as we can see anything from a few tens of thousands of neuron connections internally to a million or so, dealing with very "abstract data points/patterns", modeling how the brain works when we make predictions about sound or image objects. (Amazing work these last 10 years, and it's getting exponentially better as the research progresses.)

Examples:

Image Detection Networks:

- YOLO https://en.wikipedia.org/wiki/Object_detection
- ResNets https://towardsdatascience.com/cnn-resnets-a-more-liberal-understanding-a0143a3ddac9
- Inception network https://medium.com/ml-cheat-sheet/deep-dive-into-the-google-inception-network-architecture-960f65272314

Audio Speech Detection Networks:

https://www.analyticsvidhya.com/blog/2018/01/10-audio-processing-projects-applications/

- Audio Classification
- Audio Fingerprinting
- Automatic Music Tagging
- Audio Segmentation
- Audio Source Separation
- Beat Tracking
- Music Recommendation
« Last Edit: November 13, 2021, 01:33:50 pm by diyaudio »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Machine Learning Algorithms
« Reply #11 on: November 13, 2021, 07:54:40 pm »
So you want to identify dogs and you have thousands of photos of various breeds.  Unfortunately, all of the photos of one particular breed show the dog on a couch in various poses.  The machine will learn to identify the couch as a dog and completely overlook the dog.  Every couch looks like a dog of a specific breed!

ML is hard!  The matrix math and the gradient descent are easy to understand (just a LOT of partial derivatives; see the toy sketch at the end of this post) but true knowledge isn't easy.  Teslas still have a tendency to run into emergency vehicles - police cars and fire trucks.  Waymo robotaxis don't do well with traffic cones either.

https://www.theverge.com/2021/9/14/22673497/tesla-nhtsa-autopilot-investigation-data-ford-gm-vw-toyota
https://www.zdnet.com/article/waymo-robotaxis-struggle-to-appropriately-react-when-around-traffic-cones

Getting an identification is easy.  Getting the RIGHT identification is a good deal harder.
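
As for the 'easy' part mentioned above, here is the toy sketch: plain gradient descent fitting a single weight by following the partial derivative of the squared error (the data is made up):

# Toy gradient descent: fit y = w*x by stepping downhill on the error.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]          # roughly y = 2x

w, lr = 0.0, 0.01
for step in range(200):
    # d/dw of sum((w*x - y)^2) over the dataset
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad                  # step against the gradient
print(round(w, 2))                  # converges near 2.0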

« Last Edit: November 13, 2021, 07:57:07 pm by rstofer »
 
The following users thanked this post: cdev

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #12 on: November 13, 2021, 08:23:51 pm »
Yes. As a thought, I find it interesting how "AI" has become so popular while being, in a way, the antithesis of what computer science is usually about - for instance, proving correctness, defining and analysing algorithms, etc.

We should not confuse different levels of concepts here: while the algorithms used for ML and NNs are well understood, and relatively simple, they, by themselves, do not do anything "useful" regarding the task we're interested in - for instance, image recognition. The "networks" they create are what does the interesting work, and so they kind of implement "hidden" algorithms that we are unable to analyze. Since we can't really analyze them, is this still really science? tggzzz said it's a bit of "magic", and in a way, that's exactly it as far as we are concerned.

Just because we have formalized ways to "induce" this magic doesn't really make it less magic.

Another thought is that it's, at first, all based on trying to mimic neural-based living intelligence. Thing is, we have captured only a part of what it is. Maybe just a small part. It's not just about the amount of neurons and learning data, IMO. Neural structures in living beings, AFAIK, do not merely form based on inputs. We know there are certain structures that form, almost always in the same way and at the same locations (in the brain for instance), "coding" specific things with a specific level of abstraction. That part of "intelligence" mostly eludes us in AI at the moment, and that's only just scratching the surface.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #13 on: November 13, 2021, 08:40:14 pm »
So you want to identify dogs and you have thousands of photos of various breeds.  Unfortunately, all of the photos of one particular breed show the dog on a couch in various poses.  The machine will learn to identify the couch as a dog and completely overlook the dog.  Every couch looks like a dog of a specific breed!

And then one photo contains a dog on a couch that is 5mm higher than other couches. The machine classifies that as a table, and so identifies the dog as a different breed. But the second photo in that series is from a slightly different angle so the height is misestimated and the original "correct" breed is diagnosed
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline diyaudio

  • Frequent Contributor
  • **
  • !
  • Posts: 683
  • Country: za
Re: Machine Learning Algorithms
« Reply #14 on: November 13, 2021, 08:47:05 pm »
Yes. As a thought, I find it interesting how "AI" has become so popular while being, in a way, the antithesis of what computer science is usually about - for instance, proving correctness, defining and analysing algorithms, etc.

We should not confuse different levels of concepts here: while the algorithms used for ML and NNs are well understood, and relatively simple, they, by themselves, do not do anything "useful" regarding the task we're interested in - for instance, image recognition. The "networks" they create are what does the interesting work, and so they kind of implement "hidden" algorithms that we are unable to analyze. Since we can't really analyze them, is this still really science? tggzzz said it's a bit of "magic", and in a way, that's exactly it as far as we are concerned.

Just because we have formalized ways to "induce" this magic doesn't really make it less magic.

Another thought is that it's, at first, all based on trying to mimic neural-based living intelligence. Thing is, we have captured only a part of what it is. Maybe just a small part. It's not just about the amount of neurons and learning data, IMO. Neural structures in living beings, AFAIK, do not merely form based on inputs. We know there are certain structures that form, almost always in the same way and at the same locations (in the brain for instance), "coding" specific things with a specific level of abstraction. That part of "intelligence" mostly eludes us in AI at the moment, and that's only just scratching the surface.

Oh, don't sound so hateful, you're telling your age.  ;D
 
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Machine Learning Algorithms
« Reply #15 on: November 13, 2021, 11:53:47 pm »
In this video, Andrew Glassner talks about the dangers of deploying AI and using machines to make decisions about jobs, home purchases, duration of incarceration (that may be in a different video) and so on.

See around 16:00



In a Glassner video I was watching yesterday (don't know which one), he mentioned an AI that was used to determine the likelihood of recidivism and recommend sentencing.  The defense was unhappy with the outcome and wanted to interrogate the 'expert witness' (the machine).  The manufacturer said "No!" and that's where it died.  Nobody is accountable for the sentencing guidelines and, of course, they tend to be on the long side.  See around 19:30

It's a slippery slope when you can't even see the model much less the internals.
« Last Edit: November 13, 2021, 11:58:33 pm by rstofer »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #16 on: November 14, 2021, 02:12:11 am »
Yep indeed. With the inability to explain or analyze decisions made with AI comes the question of accountability. People will be happy to reap the benefits of AI, but nobody will ever want to be held accountable for any mishap. Yes, a bit like politics.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Machine Learning Algorithms
« Reply #17 on: November 14, 2021, 01:40:38 pm »
Also lack of context understanding.

I like to refer to an example where a supposedly state-of-the-art image recognition "AI" recognizes objects on a street, draws bounding boxes in real time and puts labels next to them. The typical demo.

But then, the rectangular thing attached to the outside of a house at the end of the driveway, which we humans call a "garage door", is misidentified as "flat screen TV", and indeed, if you just cropped that part of the image, a human could make the same mistake - it's just a rectangle with little or no detail. What makes it a garage door is the context around it. You don't buy a 300-inch flat screen TV, and you don't mount it outside your house, on ground level, at the end of your driveway. This is all obvious to a human.
 
The following users thanked this post: cdev

Offline ralphrmartin

  • Frequent Contributor
  • **
  • Posts: 480
  • Country: gb
    • Me
Re: Machine Learning Algorithms
« Reply #18 on: November 14, 2021, 05:51:01 pm »
But the second photo in that series is from a slightly different angle so the height is misestimated and the original "correct" breed is diagnosed

It's standard in these approaches to augment the original training data by adding copies of it in many different orientations and at many different scalings, to try to overcome issues like this.
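
In Keras, for instance, such augmentation is typically just a couple of preprocessing layers; a sketch, with ranges that are purely illustrative:

# Sketch: random flips/rotations/zooms so the network sees many
# plausible variants of each training image.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),    # up to +/- 10% of a full turn
    layers.RandomZoom(0.2),        # up to +/- 20% scaling
])
# Applied on the fly during training, e.g.:
#   train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))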

Nevertheless, a major problem is having sufficient training data which covers all the bases. A self driving car trained only with data from California is unlikely to do well on the very different roads in Wales, for example.

You also have to be very careful about how many false positives and false negatives are acceptable. Facial recognition systems are clearly a case in point.
 

Offline ralphrmartin

  • Frequent Contributor
  • **
  • Posts: 480
  • Country: gb
    • Me
Re: Machine Learning Algorithms
« Reply #19 on: November 14, 2021, 05:57:09 pm »
With the inability to explain or analyze decisions made with AI ...

The situation isn't as bad as you are making out. For example, if you have a look at Chollet's book I mentioned earlier in this thread, you will see there are ways to include visualizations of the data in intermediate layers in deep learning systems, to show e.g. which areas of an image the system is using to make its classifications.
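
As a rough sketch of the simplest technique of that kind from the book (visualizing an intermediate layer's activations); the model, the image, and the layer name here are assumptions for illustration:

# Sketch: build a side model exposing an intermediate layer, then
# look at which regions of the input activate it.
import tensorflow as tf

layer = model.get_layer("conv2d_1")           # layer name assumed
probe = tf.keras.Model(model.input, layer.output)
activations = probe(image[None, ...])         # shape (1, h, w, channels)
heat = tf.reduce_mean(activations, axis=-1)   # average over channels
# 'heat' highlights where this layer responds; plot it with imshow.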
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #20 on: November 14, 2021, 06:40:06 pm »
But the second photo in that series is from a slightly different angle so the height is misestimated and the original "correct" breed is diagnosed

It's standard in these approaches to augment the original training data by adding copies of it in many different orientations and at many different scalings, to try to overcome issues like this.

How many near copies are needed? One? Ten? Hundred? In a finite-sized machine, which other training examples do you discard?

Would it recognise a person with one leg and a crutch as a person?

Quote
Nevertheless, a major problem is having sufficient training data which covers all the bases. A self driving car trained only with data from California is unlikely to do well on the very different roads in Wales, for example.

How can you tell when you have sufficient training data?

If not California, then what would happen in Oregon or British Columbia? Would it be OK in London or Rome? Where are the boundaries?

In a finite-sized machine, which other training examples do you discard?
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #21 on: November 14, 2021, 07:05:13 pm »
With the inability to explain or analyze decisions made with AI ...

The situation isn't as bad as you are making out. For example, if you have a look at Chollet's book I mentioned earlier in this thread, you will see there are ways to include visualizations of the data in intermediate layers in deep learning systems, to show e.g. which areas of an image the system is using to make its classifications.

You're right, but that's still very, very far from the point where you can actually fully determine what happens in particular cases and make sense of it.
There is, and surely will continue to be, research in that particular area of neural network analysis, because it's obviously a major concern. No doubt things are going to improve.
But analyzing neural networks is inherently an NP-complete problem. Those are a bitch to tackle. The training part itself is already NP-complete.
https://www.sciencedirect.com/science/article/abs/pii/S0893608005800103

The point here is, it's not about how you can partially analyze parts of the neural network - which is certainly interesting per se - but about being able to determine for sure how and why a certain network came to a given conclusion for a given set of inputs, in order to 1/ know if we can trust the result - and thus should apply it, and 2/ determine accountability. It doesn't matter much for any non-critical task, of course. I don't mind if Google Images occasionally classifies a bird as a motorbike. But for any critical one, it certainly does.
 

Offline ralphrmartin

  • Frequent Contributor
  • **
  • Posts: 480
  • Country: gb
    • Me
Re: Machine Learning Algorithms
« Reply #22 on: November 14, 2021, 09:47:21 pm »
... being able to determine for sure how and why a certain network came to a given conclusion for a given set of inputs, in order to 1/ know if we can trust the result - and thus should apply it, and 2/ determine accountability.

You are thinking about these systems as if they should be deterministic - like most software to date. Yet human beings don't work like that. You would never get that certainty from a human, and if they are able to justify why they did something, it is usually a post-hoc explanation cooked up as a justification. You might be surprised to see how much experts disagree in interpreting x-ray images  when trying to spot cancer, for example.

Don't get me wrong. Some software should be deterministic - particularly safety critical systems - where we understand the requirements exactly. But in other cases, the answer does not have to be perfect. The system just has to produce the right answer more often than a human expert - and the people using it need to realise its limitations.

Indeed, for dull repetitive tasks (like monitoring a live video stream for the presence of one or more persons), it may not take much to beat human performance, as people get distracted, bored, etc.

Going back to "how much training data do we use?", and also asking "how do we know how well it works?", a key part of the answer is to use validation sets that are separate from the training data, to measure generalizability to new data.

And as for discarding data for a finite-sized machine - the simple answer is that you input the data in batches. You don't try to input it all at once. The issue is collecting enough data for good generalizability, rather than too much data.
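
A minimal Keras sketch of both points (the model, the data arrays, the 20% split, and the batch size of 32 are all assumptions):

# Sketch: hold out 20% of the data to measure generalization,
# and stream the rest through in batches of 32.
history = model.fit(
    x_train, y_train,
    validation_split=0.2,   # held-out data, never used for weight updates
    batch_size=32,          # finite-sized chunks, not all at once
    epochs=10,
)
print(history.history["val_accuracy"][-1])   # generalization estimate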
« Last Edit: November 14, 2021, 09:51:09 pm by ralphrmartin »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #23 on: November 14, 2021, 10:17:21 pm »
Determinism is not required. There are many useful nondeterministic systems.

Predictability is required.

In order to be predictable, the designer and/or reviewer must understand what is happening and why.

The designers do not - and unless there are fundamental advances, cannot - understand why a result appears, nor predict what the result will be. They can hope, but that isn't sufficient.

For example, if a judge incarcerates/frees you, they are expected to explain why that is the appropriate result. When an ML machine does the same, there is no explanation. Yes, that is already happening in the USA.

Ditto a car running down a pedestrian or driving into a roadside barrier.

Fundamentally, the central point of science is being able to make correct predictions. Magic and cargo-cult science aren't constrained by that.
« Last Edit: November 14, 2021, 10:20:44 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 
The following users thanked this post: SiliconWizard

Offline snarkysparky

  • Frequent Contributor
  • **
  • Posts: 414
  • Country: us
Re: Machine Learning Algorithms
« Reply #24 on: November 15, 2021, 01:08:00 pm »
Some situations where being right on average is good enough are good setups for AI.
Like sorting recyclables on a conveyor belt.  The missed edge cases are no more costly than a missed central case.

Like the casino: they only need to win 51% of the time.

But it seems AI is being applied exclusively to situations where this constraint is not acceptable.   Replacing human judgement in decisions that directly impact other humans, for instance.   We don't want to accept a wrong decision as "oh well, sht happens".

I am in the very skeptical camp about it.  I don't think any system will ever get situational awareness in a driving situation that beats a functional human in the edge cases.
I don't want a self-driving car to make a mistake and run into me, and then be told self-driving cars are better than the average human.   I expect the human, even if below average, to take responsibility for his actions.  I know...  frequently doesn't happen.

 
The following users thanked this post: SiliconWizard, emece67

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Re: Machine Learning Algorithms
« Reply #25 on: November 15, 2021, 01:16:45 pm »
One of the users here is involved in an ambitious automated ML project that is creating a framework to automatically generate appropriate ML models using a structured process.

It seems to me that making use of it is likely to be a good way to learn which approaches can best solve a problem, given the available inputs, as well as the limitations of each approach, which are many and always vary.

Computers are capable of making spectacular mistakes if you trust them too much.

Say you already have a problem in mind.

You have to be aware of the weak spots in your technology and how they might screw up. To do that you have to know what it's doing and why, inside out.

That said, doing the work, especially using a variety of different frameworks, is likely a good way to learn their strengths and weaknesses and to choose the appropriate one.



« Last Edit: November 15, 2021, 01:26:30 pm by cdev »
"What the large print giveth, the small print taketh away."
 

Offline diyaudio

  • Frequent Contributor
  • **
  • !
  • Posts: 683
  • Country: za
Re: Machine Learning Algorithms
« Reply #26 on: November 15, 2021, 01:41:52 pm »
The only AI tech events worth watching are the Tesla Autonomy Days. They cover most of the questions people are asking here.

Tesla Autonomy Day 2019 - Full Self-Driving Autopilot - Complete Investor Conference Event


Tesla Autonomy Day 2021 - Full Self-Driving Autopilot - Complete Investor Conference Event
48:44 - Tesla Vision
1:13:12 - Planning and Control
1:24:35 - Manual Labeling
1:28:11 - Auto Labeling
1:35:15 - Simulation
1:42:10 - Hardware Integration
1:45:40 - Dojo
2:05:14 - Tesla Bot
2:12:59 - Q&A

 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #27 on: November 15, 2021, 02:07:09 pm »
One of the users here is involved in an ambitious automated ML project that is creating a framework to automatically generate appropriate ML models using a structured process.

I suppose that if you believe magic works, you might as well believe that you can use magic to create magic.

Quote
It seems to me that making use of it is likely to be a good way to learn which approaches can best solve a problem, given the available inputs, as well as the limitations of each approach, which are many and always vary.

Nope. All it will give you is multiple examples of incomprehensible magic. You will be in a maze of twisty passages, all alike.

Quote
Computers are capable of making spectacular mistakes if you trust them too much.

Yup. But two wrongs don't make a right.

The "if you think this is bad you should see that" argument has always been weak and defeatist. If I ever find myself in a court of law, I'd love it if my opponent tried that argument!

Quote
You have to be aware of the weak spots in your technology and how they might screw up. To do that you have to know what it's doing and why, inside out.

Yup and - without fundamental advances - that will continue to be a problem with ML.

There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #28 on: November 15, 2021, 02:11:20 pm »
The only AI tech events worth watching are the Tesla Autonomy Days. They cover most of the questions people are asking here.

I'm not going to spend 2.5 hours of my life listening to a PR flack avoiding difficult points.

Unless dyslexic, we can all read much faster than listen. In particular we can easily skip to the core arguments, and see how they hold up.

Is there a transcript, set of slides, or other material available?
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #29 on: November 15, 2021, 04:03:05 pm »
I think most people miss how bad humans really are, when making the comparison with AI.  Yes, AI has problems, which are easy to see from the outside, but humans also:
  • Perform poorly in untrained environments, like driving in Wales after learning in California, to use an example from earlier in this thread.
  • Fail to understand their own cognition, so as to explain accurately why they did something or decided a certain way.*
  • Etc.
If you want AI to be perfect as seen from the outside, then the bar is pretty high.  Maybe even impossibly high.  But the bar to be "better than humans" is surprisingly low.



* It's interesting to me to see a demonstration of this in people who have had the two hemispheres of their brains separated for whatever reason.  In one experiment, one side is told to choose an object, and the other side to explain why that object was chosen.  There is always "an explanation", but it's sometimes amusing to see what they come up with.  That demonstrated capability, plus my own experience, tells me that most of our self-explanations are really justifications after the fact, and not recordings at all of what we were thinking.  Pretty much equal to "black box" AI in that respect.

We can certainly learn from these justifications of our previous decisions, but in AI terms, that's exactly "more training data".  We still don't have a record of the actual thought process itself.

Also note that a human's age is also the amount of time that an equivalent AI would have to train for, with that person's experiences over that time, to become equal to that person.  (with "layers" of learning as a fundamental key concept: it seems to me that expecting today's AI to identify a school bus or a snowplow is like expecting an infant to do that before they've even learned what a "shape" is)  Given the performance and time scale that we expect from AI, I think it's a grossly unfair comparison.  If it can be met, that would be great!  But it's still unfair.
(and most of us aren't that patient, especially investors and managers)
 

Offline ralphrmartin

  • Frequent Contributor
  • **
  • Posts: 480
  • Country: gb
    • Me
Re: Machine Learning Algorithms
« Reply #30 on: November 15, 2021, 04:17:39 pm »
Determinism is not required. There are many useful nondeterministic systems. Predictability is required.

Then I submit that, according to your own statement, there is nothing wrong with an AI system where, having tried it on a million verification cases with known ground truth, it can be confidently predicted that the AI system will give the right answer in 98.7% of cases, despite not understanding how it works.

You seem to mistrust such systems because you don't know how they work - because you don't "understand" how they produce answers. But if you consider further, we don't really "understand" the laws of physics either. Any chain of "why" ultimately hits "don't know": e.g. why does this current flow? Because of Maxwell's laws. But why do Maxwell's equations take the form they do? There is nothing to say that such laws won't change tomorrow, or don't work in some corner case. There may be a heck of a lot more verification in the case of the laws of physics - but it's only a matter of the amount of data, not a fundamental difference in understanding.
 

Offline diyaudio

  • Frequent Contributor
  • **
  • !
  • Posts: 683
  • Country: za
Re: Machine Learning Algorithms
« Reply #31 on: November 15, 2021, 05:08:00 pm »
The only AI tech events worth watching are the Tesla Autonomy Days. They cover most of the questions people are asking here.

I'm not going to spend 2.5 hours of my life listening to a PR flack avoiding difficult points.

Unless dyslexic, we can all read much faster than listen. In particular we can easily skip to the core arguments, and see how they hold up.

Is there a transcript, set of slides, or other material available?

Typical response from the uneducated. Stick to capacitors and inductors.
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11260
  • Country: us
    • Personal site
Re: Machine Learning Algorithms
« Reply #32 on: November 15, 2021, 05:26:18 pm »
Typical response from the uneducated. Stick to capacitors and inductors.
No, it is a very typical fanboy method to use YouTube marketing videos from a sketchy corporation as a valid source of information. Tesla are invested in this; they will avoid discussing potential problems at all costs. All the discussion of potential issues is on the level of the job interview question "what is your biggest weakness" - "I work too hard". BS.
Alex
 
The following users thanked this post: Siwastaja, bgm370

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #33 on: November 15, 2021, 05:46:26 pm »
The only AI tech events worth watching are the Tesla Autonomy Days. They cover most of the questions people are asking here.

I'm not going to spend 2.5 hours of my life listening to a PR flack avoiding difficult points.

Unless dyslexic, we can all read much faster than listen. In particular we can easily skip to the core arguments, and see how they hold up.

Is there a transcript, set of slides, or other material available?

Typical response from the uneducated. Stick to capacitors and inductors.

Looks like a habit of yours to reply to posts with insults. I'm angry, tggzzz is uneducated, surely. Come on. Either you have actual articulated answers to formulate, and we can happily discuss even if we don't agree, or you can just restrain yourself.

And yes, Tesla talks are marketing fluff for the most part. At least please point us to specific parts of the talk which could actually address any of the points we raised here. But surely, if you master the topic enough to be convinced we are just completely wrong, you can then give us strong arguments yourself instead of resorting to posting videos too.
 

Offline diyaudio

  • Frequent Contributor
  • **
  • !
  • Posts: 683
  • Country: za
Re: Machine Learning Algorithms
« Reply #34 on: November 15, 2021, 06:08:03 pm »
The only AI tech events worth watching are the Tesla Autonomy Days. They cover most of the questions people are asking here.

I'm not going to spend 2.5 hours of my life listening to a PR flack avoiding difficult points.

Unless dyslexic, we can all read much faster than listen. In particular we can easily skip to the core arguments, and see how they hold up.

Is there a transcript, set of slides, or other material available?

Typical response from the uneducated. Stick to capacitors and inductors.

Looks like a habit of yours to reply to posts with insults. I'm angry, tggzzz is uneducated, surely. Come on. Either you have actual articulated answers to formulate, and we can happily discuss even if we don't agree, or you can just restrain yourself.

And yes, Tesla talks are marketing fluff for the most part. At least please point us to specific parts of the talk which could actually address any of the points we raised here. But surely, if you master the topic enough to be convinced we are just completely wrong, you can then give us strong arguments yourself instead of resorting to posting videos too.

Newton's 3rd law.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #35 on: November 15, 2021, 07:18:28 pm »
I think most people miss how bad humans really are, when making the comparison with AI.  Yes, AI has problems, which are easy to see from the outside, but humans also:
  • Perform poorly in untrained environments, like driving in Wales after learning in California, to use an example from earlier in this thread.
  • Fail to understand their own cognition, so as to explain accurately why they did something or decided a certain way.*
  • Etc.
If you want AI to be perfect as seen from the outside, then the bar is pretty high.  Maybe even impossibly high.  But the bar to be "better than humans" is surprisingly low.

Contrariwise, people often say "because the computer says so" as a justification - i.e. they do act as if the computer is infallible. That simplistic world view also leads people (both drivers and legislators) to put too much trust in automated driving systems.

It becomes completely untenable when nobody knows (or can know) why the computer "said so". That's cargo-cult decision making.

OTOH, they don't expect people to be infallible, and act accordingly. Good.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #36 on: November 15, 2021, 07:35:13 pm »
Determinism is not required. There are many useful nondeterministic systems. Predictability is required.

Then I submit that, according to your own statement, there is nothing wrong with an AI system where, having tried it on a million verification cases with known ground truth, it can be confidently predicted that the AI system will give the right answer in 98.7% of cases, despite not understanding how it works.

That's poor reasoning.

If you can predict the 1.3% of cases in which it will fail, then that would be very acceptable - since we could just ignore/discount the result. (E.g. if it doesn't work the 1.3% of the time the temperature is below -5C, then we wouldn't use it in cold weather)

Would you be content if the 1.3% resulted in you being seriously injured or locked up in jail?

Consider the medical diagnosis system that was eventually found to be using the font on the x-rays to diagnose how serious the condition was!

Quote
You seem to mistrust such systems because you don't know how they work - because you don't "understand" how they produce answers. But if you consider further, we don't really "understand" the laws of physics either. Any chain of "why" ultimately hits "don't know": e.g. why does this current flow? Because of Maxwell's laws. But why do Maxwell's equations take the form they do? There is nothing to say that such laws won't change tomorrow, or don't work in some corner case. There may be a heck of a lot more verification in the case of the laws of physics - but it's only a matter of the amount of data, not a fundamental difference in understanding.

You don't seem to understand science. In science the only thing that matters is predicting the result.

Fitting a hypothesis to previous observations is not science. (E.g. "gold is a good investment because it went up 50% last week" is an argument that only charlatans would use!)

Fitting a hypothesis to previous observations and then using the hypothesis to make falsifiable predictions is science.
« Last Edit: November 15, 2021, 07:43:58 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #37 on: November 15, 2021, 07:39:03 pm »
The only AI tech events worth watching are the Tesla Autonomy Days. They cover most of the questions people are asking here.

I'm not going to spend 2.5 hours of my life listening to a PR flack avoiding difficult points.

Unless dyslexic, we can all read much faster than listen. In particular we can easily skip to the core arguments, and see how they hold up.

Is there a transcript, set of slides, or other material available?

Typical response from the uneducated. Stick to capacitors and inductors.

Looks like a habit of yours to reply to posts with insults. I'm angry, tggzzz is uneducated, surely. Come on. Either you have actual articulated answers to formulate, and we can happily discuss even if we don't agree, or you can just restrain yourself.

And yes, Tesla talks are marketing fluff for the most part. At least please point us to specific parts of the talk which could actually address any of the points we raised here. But surely, if you master the topic enough to be convinced we are just completely wrong, you can then give us strong arguments yourself instead of resorting to posting videos too.

newton's 3rd law.

Q.E.D.

There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #38 on: November 15, 2021, 07:42:11 pm »
Typical response from the uneducated. Stick to capacitors and inductors.
No, it is a very typical fanboy method to use YouTube marketing videos from a sketchy corporation as a valid source of information.

... often with an implicit "here's my statement, it's up to you to prove me wrong".

That's nonsense of course; it is up to you to prove your statement.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline diyaudio

  • Frequent Contributor
  • **
  • !
  • Posts: 683
  • Country: za
Re: Machine Learning Algorithms
« Reply #39 on: November 15, 2021, 08:00:43 pm »
Typical response from the uneducated. Stick to capacitors and inductors.
No, it is a very typical fanboy method to use YouTube marketing videos from a sketchy corporation as a valid source of information.

... often with an implicit "here's my statement, it's up to you to prove me wrong".

That's nonsense of course; it is up to you to prove your statement.

Like I said, you should stick to capacitors and inductors; nothing wrong with that. I don't expect ReLU pooling to find convergence with any of your remarks. Responding to idiots like you is enough for me. If you cannot jump/skip through a video and get to the core parts where Andrej Karpathy speaks about auto labeling and boost regression unit tests with fleet feedback, it's straight to resistors for you, my friend... and again, that's your domain; bitching and crying about things you don't understand won't help you.
 

Offline Simon

  • Global Moderator
  • *****
  • Posts: 17816
  • Country: gb
  • Did that just blow up? No? might work after all !!
    • Simon's Electronics
Re: Machine Learning Algorithms
« Reply #40 on: November 15, 2021, 08:17:59 pm »
Typical response from the uneducated. Stick to capacitors and inductors.
No, it is a very typical fanboy method to use YouTube marketing videos from a sketchy corporation as a valid source of information.

... often with an implicit "here's my statement, it's up to you to prove me wrong".

That's nonsense of course; it is up to you to prove your statement.

Like I said, you should stick to capacitors and inductors; nothing wrong with that. I don't expect ReLU pooling to find convergence with any of your remarks. Responding to idiots like you is enough for me. If you cannot jump/skip through a video and get to the core parts where Andrej Karpathy speaks about auto labeling and boost regression unit tests with fleet feedback, it's straight to resistors for you, my friend... and again, that's your domain; bitching and crying about things you don't understand won't help you.


Unfortunately for you, you don't get to tell people what to do; that is a privilege reserved for a few of us, and I am one of those. I suggest you leave the topic that you are derailing with your babbling before I make you leave in a very permanent way. If anyone needs to stick to passive components only, it's you, but I suggest just sticking to resistors at first!
 

Offline diyaudio

  • Frequent Contributor
  • **
  • !
  • Posts: 683
  • Country: za
Re: Machine Learning Algorithms
« Reply #41 on: November 15, 2021, 08:20:37 pm »
Typical response from the uneducated. Stick to capacitors and inductors.
No, it is a very typical fanboy method to use YouTube marketing videos from a sketchy corporation as a valid source of information.

... often with an implicit "here's my statement, it's up to you to prove me wrong".

That's nonsense of course; it is up to you to prove your statement.

Like I said, you should stick to capacitors and inductors; nothing wrong with that. I don't expect ReLU pooling to find convergence with any of your remarks. Responding to idiots like you is enough for me. If you cannot jump/skip through a video and get to the core parts where Andrej Karpathy speaks about auto labeling and boost regression unit tests with fleet feedback, it's straight to resistors for you, my friend... and again, that's your domain; bitching and crying about things you don't understand won't help you.


Unfortunately for you, you don't get to tell people what to do; that is a privilege reserved for a few of us, and I am one of those. I suggest you leave the topic that you are derailing with your babbling before I make you leave in a very permanent way. If anyone needs to stick to passive components only, it's you, but I suggest just sticking to resistors at first!

Two flags from the same country backing each other up. Nice. You can fuck off and disable my account.
 

Offline Simon

  • Global Moderator
  • *****
  • Posts: 17816
  • Country: gb
  • Did that just blow up? No? might work after all !!
    • Simon's Electronics
Re: Machine Learning Algorithms
« Reply #42 on: November 15, 2021, 08:30:11 pm »
Your wish is my command!
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #43 on: November 15, 2021, 09:01:02 pm »
Two flags from the same country backing each other up. Nice. You can fuck off and disable my account.

The quality of your reasoning (and I use that term loosely) doesn't inspire confidence that listening to "your" videos would be a good use of an afternoon.

Shame. If there is reason to believe I'm too pessimistic, I'd love to learn and revise my opinion.
« Last Edit: November 15, 2021, 09:03:06 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Simon

  • Global Moderator
  • *****
  • Posts: 17816
  • Country: gb
  • Did that just blow up? No? might work after all !!
    • Simon's Electronics
Re: Machine Learning Algorithms
« Reply #44 on: November 15, 2021, 09:05:55 pm »
A bit late, he got his wish rather fast.
 
The following users thanked this post: RandallMcRee

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #45 on: November 15, 2021, 11:14:37 pm »
I think most people miss how bad humans really are, when making the comparison with AI.  Yes, AI has problems, which are easy to see from the outside, but humans also:
  • Perform poorly in untrained environments, like driving in Wales after learning in California, to use an example from earlier in this thread.
  • Fail to understand their own cognition, so as to explain accurately why they did something or decided a certain way.*
  • Etc.
If you want AI to be perfect as seen from the outside, then the bar is pretty high.  Maybe even impossibly high.  But the bar to be "better than humans" is surprisingly low.

Contrariwise, people often say "because the computer says so" as a justification - i.e. they do act as if the computer is infallible. That simplistic world view also leads people (both drivers and legislators) to put too much trust in automated driving systems.

Absolutely!  Like I said, it's easy to see the problems from the outside*, and the bar for perfection is extremely high.
(*Unless your job is essentially a "human terminal" that does nothing but data entry and readout, and translates that to/from a customer.  Government jobs seem to be rife with those, but they're not exclusive.)

"Garbage in, garbage out," will ALWAYS be true!  But again, humans also have that problem.  That should be a motivation to get the inputs right (counting the underlying logic as the result of more inputs), not to reject the system altogether.

It becomes completely untenable when nobody knows (or can know) why the computer "said so". That's cargo-cult decision making.

OTOH, they don't expect people to be infallible, and act accordingly. Good.

It's similarly hard to know why humans make the decisions that they do.  ("Idiot Compilations" on YouTube, for example, and I'm sure the more experienced among us have some personal stories to that effect, from when we should have known better but didn't use that knowledge...)  Same problem, but somehow we're more comfortable with one than with the other.

We already share the road/job-site/country/etc. with these people.  You might even be one at times.  Widespread automation will still make some mistakes, but ignoring the media hype and hollywood's depiction, I think even today's potential, installed and launched competently (yeah, that's not going to happen by a low-bid contractor), would be a vast improvement over the way we're doing things now.



I wonder if the problem is not so much the quality of decision-making, but the ability to assign blame.  We seem to be willing to accept a higher accident rate if we can blame a specific person for it.  We frame our arguments in terms of decision-making, often being honest about the machines and Dunning-Kruger about ourselves, but the real reason is not about that at all.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #46 on: November 15, 2021, 11:19:04 pm »
Getting back to the discussion a bit, a few points.

Determinism: I don't think that's exactly the issue here. Not just because this isn't really what matters, but also because, actually, current AI systems ARE deterministic. For a given set of inputs, a given trained NN will give the same output(s). Likewise, for a given training dataset (and fixed initialization), a given NN structure will end up with the same coefficients. This may form a complex system, but it's still deterministic. Now, for two sets of inputs that seem very close to *us*, NNs can sometimes give a completely different output. That doesn't make them non-deterministic, if that's what those who mentioned the term meant. But it certainly makes them incomprehensible to us.
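
A quick sketch of the inference-side determinism (assuming some trained Keras model and a 128x128 RGB input):

# Sketch: the same trained network, fed the same input twice on the
# same machine, produces identical outputs - deterministic, if opaque.
import numpy as np

x = np.random.rand(1, 128, 128, 3).astype("float32")
out1 = model.predict(x)
out2 = model.predict(x)
print(np.array_equal(out1, out2))   # True: same input, same output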

Comparing to human intelligence: it's kind of a lost cause here. Especially regarding the ability to explain a given decision. Sure, humans are not perfect and can also make bogus decisions. But the key difference is that people in charge of critical decisions impacting others must usually document their decision before making it effective. That's how it's done in a lot of areas such as justice, medicine, etc. At the moment, we somehow don't expect AI to provide the decision process (mostly because we are unable to do that technically for now), so it's completely different. Being able to explain a decision is a key part of any safety-critical process. It's even more important than just "being correct" per se.

Now that part may not be a completely lost cause with AI. We could design systems that are made to output the decision process in an understandable form before giving the decision itself. Yes, I've seen attempts at doing that in a couple of papers. But so far, this is mostly just research. And it's not just about being able to implement this technically: it's also about being willing to *enforce* it, and I haven't seen anything like that so far. That may change, and regulations may come into place over time.

snarkysparky made a good point: there definitely are applications for which all this is NOT a problem, and for which a success rate above a certain threshold is perfectly good, whatever the reasons for the failing cases. But as he said, we seem to insist on applying AI to a lot of applications for which this is fundamentally not acceptable.

Then comes again the question of accountability. If a human adult makes a mistake with consequences, they'll be accountable (unless they are considered mentally deficient or something like that.) If some AI system makes a mistake with bad consequences, who the heck is going to be accountable exactly? It's still a major question for which I haven't really seen a proper and definite answer. Will it be the company directly providing the system using AI? Will it be the company which designed the AI subsystem itself? Will it be the company which designed the datasets and trained the AI subsystem? Or will it be the end-user? It's all a big fuzzy mess, but I'll be glad to hear about some progress on this, maybe there is some!

Also, if we think about AI as a tool - which it is - it's quite normal that we expect it to perform in a predictable and understandable way. To make a fun parallel, imagine you buy a hammer that goes down when you give it a downwards movement 99% of the time, but 1% of the time it will go up and hit anything else it might reach. Does that sound like a decent tool? The same goes for the laws of physics: we may not understand them fully yet, for sure, and we still have a lot to learn. But in a given context, the laws we have determined still hold 100% of the time. Quantum gravity is a complex matter, for sure, but if I jump off a bridge, there's a 100% probability that I'll fall down and 0% that I'll magically go up and end up orbiting the Earth. What's interesting is the question of why some of us seem willing to consider AI not as a tool, but as something else.
« Last Edit: November 15, 2021, 11:31:34 pm by SiliconWizard »
 
The following users thanked this post: AaronD

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #47 on: November 16, 2021, 12:36:36 am »
SiliconWizard makes sane points :)

If an NN gives a completely different output for a trivially different input, then not only is it "incomprehensible to us" but it is also unpredictable and therefore unreliable. (Exception: boring systems where it doesn't matter if a mistake is made).

As for who is accountable if an NN system fails disastrously, that is becoming clearer with autonomous cars: the driver. Yes, that seriously compromises the utility of the autonomous features.

That was illustrated a couple of weeks ago when I spent half an hour chatting to a Tesla sales droid. He happily spouted Tesla's standard DoubleSpeak, giving the impression that you could relax while the car navigated itself, but, when pushed, that the driver was always in control. He didn't define what that means in practice.

I noted that sometimes the Tesla autopilot realised it was confused and handed control back to the driver (quite reasonably). I asked how much warning a driver had of that, and the answer was waffle about special cases. He refused to engage with the fact that a human that is not paying attention to the road will take 5-15s to be in a position to make an appropriate decision.

I asked the droid to show me how to adjust the ventilation system so that it would blow hot air over the windscreen to clear mist. That's a typical action here at this time of year :(

His first attempt was to use the giant flatscreen touchscreen with small poor-contrast fonts. After looking away from the road for the best part of 60s and fondling the screen, he partially succeeded. I noted he was not paying sufficient attention to the road to be in control of the vehicle, and did that mean it could only be safely adjusted when stationary? He mumbled, and said when driving it could be done using voice control.

That would be reasonable, so I asked him to demonstrate it. After a few attempts he only managed to turn on the heater in the seat. Snort.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #48 on: November 16, 2021, 01:11:19 am »
I noted that sometimes the Tesla autopilot realised it was confused and handed control back to the driver (quite reasonably)......a human that is not paying attention to the road will take 5-15s to be in a position to make an appropriate decision.

That is the "automation paradox".  The same can be said for airplane autopilots and internet content filters.  The system makes a vast improvement, but when it does inevitably fail, the failure is made worse by the human not being competent.  (whether you put a "yet" or "anymore" at the end of that doesn't matter)

Nevertheless, even including those failures and their new consequences, it's still better than supposedly-competent all-human control.  Humans are amazingly unpredictable and often not smart.  Including the extensively-trained ones, but especially for the general public.



Sounds like Tesla sent a trained parrot and not a real expert.  As soon as you got him outside of his training, he fell apart.

Also, I had some brief involvement as a contractor in a Tesla car factory, installing a new production line.  One of the other guys on my team commented about Tesla not being a car company, but a Silicon Valley tech company that only happens to make cars instead of web services.  I think he was right.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #49 on: November 16, 2021, 09:04:09 am »
I noted that sometimes the Tesla autopilot realised it was confused and handed control back to the driver (quite reasonably)......a human that is not paying attention to the road will take 5-15s to be in a position to make an appropriate decision.

That is the "automation paradox".  The same can be said for airplane autopilots and internet content filters.  The system makes a vast improvement, but when it does inevitably fail, the failure is made worse by the human not being competent.  (whether you put a "yet" or "anymore" at the end of that doesn't matter)

Yup, and human factors should never be ignored! Ignore them and you or someone else will be bitten.

Regarding aircraft autopilots, lore has it that the last words on cockpit voice recorders are often "what's it doing now?".

Many people would dispute the idea that automated content filters aren't dangerous. It is even possible to conceive of circumstances in which fatalities could occur!

Quote
Nevertheless, even including those failures and their new consequences, it's still better than supposedly-competent all-human control.

That's questionable. Boeing has taken that attitude and it has destroyed the company's reputation. Think 737-MAX and Starliner. ("If it isn't Boeing I'm not going" is now laughable)

It is a recognised problem that pilots are becoming automation controllers at the expense of stick-and-rudder competency. That's caused many crashes.

Quote
Humans are amazingly unpredictable and often not smart.  Including the extensively-trained ones, but especially for the general public.

Agreed. If autopilots fail with highly trained personnel in constrained environments, what chance is there with untrained personnel in unpredictable environments? But this isn't just about cars; legal liability and jail time can be involved.

Here's a recent misclassification which resulted in legal proceedings. https://catless.ncl.ac.uk/Risks/32/91#subj1 In this case the error was so obvious (and amusing) that the proceedings were aborted, but many won't be. Start with facial recognition, and not much imagination is required.

Quote
Sounds like Tesla sent a trained parrot and not a real expert.  As soon as you got him outside of his training, he fell apart.

He was the salesman in a showroom with one car in it. On the important subjects, he was clearly parroting the company line.

Controlling the air circulation is something every driver will have to do frequently. Bad conceptual design allowed bad ML to make it so complicated it was dangerous.

Quote
Also, I had some brief involvement as a contractor in a Tesla car factory, installing a new production line.  One of the other guys on my team commented about Tesla not being a car company, but a Silicon Valley tech company that only happens to make cars instead of web services.  I think he was right.

Yes, and with the Silicon Valley culture of shipping betas, letting the customer discover faults, and hiding behind "commercial confidentiality" to avoid inspection. In practice all commercial ML will be like that :(

Tesla also updates the ML without your permission, so that a car which detected/avoided a problem today might not tomorrow. "What's my car doing today?"
« Last Edit: November 16, 2021, 09:08:04 am by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline ralphrmartin

  • Frequent Contributor
  • **
  • Posts: 480
  • Country: gb
    • Me
Re: Machine Learning Algorithms
« Reply #50 on: November 16, 2021, 04:31:10 pm »
If you can predict the 1.3% of cases in which it will fail, then that would be very acceptable - since we could just ignore/discount the result. (E.g. if it doesn't work the 1.3% of the time the temperature is below -5C, then we wouldn't use it in cold weather)

Would you be content if the 1.3% resulted in you being seriously injured or locked up in jail?

That's not how things work in real life. If we knew which of the one in a million (or whatever the fraction actually is) flights is the one that is going to crash, we would not get on it. Instead, we take a flight knowing that the risk of a crash is small. Asking for a system with no unpredictable failures is unrealistic. Failures can occur due to programming errors (even if the algorithm is well characterised), hardware failures, cosmic rays, operator error, and so on.


Fitting a hypothesis to previous observations is not science. (E.g. gold is a good investment because it went up 50% last week is an argument that only charlatans would use!)

Fitting a hypothesis to previous observations and then using the hypothesis to make falsifiable predictions is science.

Again, you misunderstand how deep learning is done. When building a deep learning system, fitting the hypothesis to the previous observations is called training. Using the hypothesis to make falsifiable predictions on different data is then called verification. The aim of doing so is to ensure the model generalises to new, unseen data. Both of your steps are used. Clearly, there needs to be a lot of care taken in ensuring these data sets are independent, are representative of the real uses to which the model is put and so on, and these are not easy. But there is nothing fundamentally different or unscientific about deep learning.
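
To illustrate the separation, here's a minimal sketch of that train/verify split using scikit-learn, with a placeholder dataset and model (the point is only the data separation, not the particular classifier):
Code: [Select]
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Held-out data is never seen during training - this is the set on which
# the fitted "hypothesis" must make falsifiable predictions.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))
print("test accuracy: ", model.score(X_test, y_test))  # the honest number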

You don't like deep learning as you "don't understand what is inside the black box". But why should we trust an inverse square law for electrostatics? We no more "understand" why nature should apparently follow a simple mathematical rule in this case, nor where and how that rule may break down. People trusted Newtonian mechanics until relativity showed it to be a poor description in some cases.

Anyway, let me finish this discussion with the observation that you are unlikely to see me in a self driving car in the near future. Given the current success rate of deep learning systems on much simpler, and less safety critical, computer vision systems, and that driving is a much more complex problem with many "unknown unknowns", I do not believe they are likely to reach an acceptable (to me) success rate in the near future, except in very controlled conditions. This is not because the method is flawed per se, but because the problem is too difficult.
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #51 on: November 16, 2021, 04:53:44 pm »
It is a recognised problem that pilots are becoming automation controllers at the expense of stick-and-rudder competency. That's caused many crashes.

...and saved countless others.  Nobody records what would have happened but didn't, so the data is skewed.

The explanation that I heard for Boeing's recent debacles was that they trusted outside contractors too much.  They used to know what they were doing and did it themselves, so it was right.  Now, they contract out important stuff to people who don't have a clue but are much cheaper, and neglect to tell them ALL of the requirements because they're used to it being common knowledge.  Turns out it isn't, and they end up with a software product that has an input for a redundant sensor but doesn't actually use it...

The biggest hindrance to widespread automation is NOT the engineering.  It's the short-sighted idiot bean-counters that routinely take over engineering and screw it up.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #52 on: November 16, 2021, 05:02:41 pm »
If you can predict the 1.3% of cases in which it will fail, then that would be very acceptable - since we could just ignore/discount the result. (E.g. if it doesn't work the 1.3% of the time the temperature is below -5C, then we wouldn't use it in cold weather)

Would you be content if the 1.3% resulted in you being seriously injured or locked up in jail?

That's not how things work in real life. If we knew which of the one in a million (or whatever the fraction actually is) flights is the one that is going to crash, we would not get on it.

I'm well aware of that!

Quote
Instead, we take a flight knowing that the risk of a crash is small. Asking for a system with no unpredictable failures is unrealistic.  Failures can occur due to programming errors (even if the algorithm is well characterised), hardware failures, cosmic rays, operator error, and so on.

You continue to miss the point.

It is unreasonable to base a safety critical system on a technology and implementation that is not subject to inspection, understanding, and validation.

Quote
Fitting a hypothesis to previous observations is not science. (E.g. gold is a good investment because it went up 50% last week is an argument that only charlatans would use!)

Fitting a hypothesis to previous observations and then using the hypothesis to make falsifiable predictions is science.

Again, you misunderstand how deep learning is done. When building a deep learning system, fitting the hypothesis to the previous observations is called training. Using the hypothesis to make falsifiable predictions on different data is then called verification. The aim of doing so is to ensure the model generalises to new, unseen data. Both of your steps are used. Clearly, there needs to be a lot of care taken in ensuring these data sets are independent, are representative of the real uses to which the model is put and so on, and these are not easy. But there is nothing fundamentally different or unscientific about deep learning.

There's an old engineering maxim that young software developers seem to be unable to comprehend: "you can't test quality into a product (it has to be designed in)". Verification is merely another name for testing.

Quote
You don't like deep learning as you "don't understand what is inside the black box". But why should we trust an inverse square law for electrostatics? We no more "understand" why nature should apparently follow a simple mathematical rule in this case, nor where and how that rule may break down. People trusted Newtonian mechanics until relativity showed it to be a poor description in some cases.

No, I don't dislike it for that reason. I dislike it because nobody, not even the designers can understand it.

Quote
Anyway, let me finish this discussion with the observation that you are unlikely to see me in a self driving car in the near future. Given the current success rate of deep learning systems on much simpler, and less safety critical, computer vision systems, and that driving is a much more complex problem with many "unknown unknowns", I do not believe they are likely to reach an acceptable (to me) success rate in the near future, except in very controlled conditions. This is not because the method is flawed per se, but because the problem is too difficult.

Quite, although you might end up on top of somebody else's self-driving car :) The problem is that ML is being applied to safety critical systems, regardless of the lack of suitability. And I include "medical diagnosis" and "court sentencing" as safety critical systems.

The problems I have noted are common to all ML systems. A reputable non-alarmist technophile organisation (the IEEE) has a decent short introductory article to ML problems at https://spectrum.ieee.org/ai-failures
  • Brittleness
  • Embedded Bias
  • Catastrophic Forgetting (particularly relevant to your verification contentions)
  • Explainability
  • Quantifying Uncertainty
  • Common Sense
  • Math
Now I'll concede that in humans "common sense isn't", and that maths isn't necessarily a problem.

If you don't think the other problems are important or real, I'd be interested to hear your reasoning.


« Last Edit: November 16, 2021, 05:16:46 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #53 on: November 16, 2021, 05:11:53 pm »
It is a recognised problem that pilots are becoming automation controllers at the expense of stick-and-rudder competency. That's caused many crashes.

...and saved countless others.  Nobody records what would have happened but didn't, so the data is skewed.

That is the standard contention, and there is some validity to it. But it isn't completely clear-cut.

It definitely isn't obvious in medical diagnosis and court sentencing ML applications.

Quote
The explanation that I heard for Boeing's recent debacles was that they trusted outside contractors too much.  They used to know what they were doing and did it themselves, so it was right.  Now, they contract out important stuff to people who don't have a clue but are much cheaper, and neglect to tell them ALL of the requirements because they're used to it being common knowledge.  Turns out it isn't, and they end up with a software product that has an input for a redundant sensor but doesn't actually use it...

The biggest hindrance to widespread automation is NOT the engineering.  It's the short-sighted idiot bean-counters that routinely take over engineering and screw it up.

The first paragraph is irrelevant, even if true. N.B. the brown stuff has hit the fan and will land everywhere. One Boeing employee (Mark A. Forkner) has already been indicted.

The second paragraph might be valid somewhere, but unfortunately not on Planet Earth. That's the way things work here :(

Do have a look at the examples in https://spectrum.ieee.org/ai-failures
« Last Edit: November 16, 2021, 05:20:11 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #54 on: November 16, 2021, 05:25:23 pm »
It is a recognised problem that pilots are becoming automation controllers at the expense of stick-and-rudder competency. That's caused many crashes.

...and saved countless others.  Nobody records what would have happened but didn't, so the data is skewed.

In cases where the root cause is automation, you're right. But the fact that pilots don't understand what the automation is doing at some point, and thus don't know how to take corrective actions that COULD have been taken if the automation's behaviour were clearer, is still a major issue and can be seen in many of these cases, including of course the Boeing debacle. Had the pilots understood what the automation was doing, the planes would never have crashed.

It's not about getting the perfect tools, it's about getting decent tools that their users know well.

And, OTOH, there are of course a number of crashes not caused by automation at all, but by various hardware failures, for instance. In those cases we have ample proof of how pilots can react, and how some are able to land safely with a severely damaged plane. So we definitely know that humans can react to completely unexpected events in a much better way than any machine could.

To me, the Boeing issue is very telling. Sure, we can say that it's a huge design mistake. But that will happen again. No design process is perfect, and even though it's kind of easy in this example to assign blame, there are cases for which it's a lot less so. Critical systems must always be designed so that they are resilient. That includes the obvious redundancy, which was largely missing in Boeing's case, and enabling users to take corrective actions.

And it's a good thing here that the software used for the MCAS was infinitely simpler in itself, and easier to understand, than any AI-based stuff. So we could at least determine what the problem was, and fix it. If we don't know what the problem is, we can never fix it. Again, if we can't analyze why a given system fails, we can't fix it. We can only run in circles like flies and frantically retrain NNs until we seem to get a better success rate than the previous version with larger/seemingly "better" training datasets, and cross our fingers. That's an odd way of considering safety and correctness.

Also, pure statistics are great for some things, less interesting for others too. I gave this fun hammer example. But it's IMO an interesting question.
Say we have one fully automated system for which extensive tests have shown a correct behavior rate of 99%. Now say that an equivalent approach with a less automated system and more human control is estimated to have a rate of 98%. Which one are you going to feel safer with? Which one seems best for long-term use? Which one is easier to fix or improve? There are underlying questions that are a lot more complex than they might seem.

And accountability is also a major point here IMO. No, it's not per se about "who to put the blame on" so we can get some feeling of revenge and move on. Accountability is there to give a strong incentive both to limit errors before they happen and to fix errors when they do happen. Without accountability, there is exactly ZERO incentive to fix/improve anything, except maybe just for marketing reasons. "Look, my autonomous plane has 0.1% probability of crashing, yours has 0.2% ! Buy me!". So lack of accountability = design things to the minimum level of safety possible and put profitability before safety.

« Last Edit: November 16, 2021, 05:27:19 pm by SiliconWizard »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #55 on: November 16, 2021, 05:45:33 pm »
It is a recognised problem that pilots are becoming automation controllers at the expense of stick-and-rudder competency. That's caused many crashes.

...and saved countless others.  Nobody records what would have happened but didn't, so the data is skewed.

In cases where the root cause is automation, you're right. But the fact that pilots don't understand what the automation is doing at some point, and thus don't know how to take corrective actions that COULD have been taken if the automation's behaviour were clearer, is still a major issue and can be seen in many of these cases, including of course the Boeing debacle. Had the pilots understood what the automation was doing, the planes would never have crashed.

It's not about getting the perfect tools, it's about getting decent tools that their users know well.

And, OTOH, there are of course a number of crashes not caused by automation at all, but by various hardware failures, for instance. In those cases we have ample proof of how pilots can react, and how some are able to land safely with a severely damaged plane. So we definitely know that humans can react to completely unexpected events in a much better way than any machine could.

Precisely, on all counts.

A personal example is that I've safely stopped a car after a wheel fell off when overtaking. I wonder what a Tesla would have done?

My two favourite aircraft examples are the safe landings of a B-52 missing most of its tail fin and an F-15 missing a wing. They are easy to locate, and there are videos of the latter. Plus, of course, there is the stunning UA232, which lost all hydraulics and with them all control surfaces; they even made a (poor) movie about that one.



Your points below are also valid.

Quote
To me, the Boeing issue is very telling. Sure, we can say that it's a huge design mistake. But that will happen again. No design process is perfect, and even though it's kind of easy in this example to assign blame, there are cases for which it's a lot less so. Critical systems must always be designed so that they are resilient. That includes the obvious redundancy, which was largely missing in Boeing's case, and enabling users to take corrective actions.

And it's a good thing here that the software used for the MCAS was infinitely simpler in itself, and easier to understand, than any AI-based stuff. So we could at least determine what the problem was, and fix it. If we don't know what the problem is, we can never fix it. Again, if we can't analyze why a given system fails, we can't fix it. We can only run in circles like flies and frantically retrain NNs until we seem to get a better success rate than the previous version with larger/seemingly "better" training datasets, and cross our fingers. That's an odd way of considering safety and correctness.

Also, pure statistics are great for some things, less interesting for others too. I gave this fun hammer example. But it's IMO an interesting question.
Say we have one fully automated system for which extensive tests have shown a correct behavior rate of 99%. Now say that an equivalent approach with a less automated system and more human control is estimated to have a rate of 98%. Which one are you going to feel safer with? Which one seems best for long-term use? Which one is easier to fix or improve? There are underlying questions that are a lot more complex than they might seem.

And accountability is also a major point here IMO. No, it's not per se about "who to put the blame on" so we can get some feeling of revenge and move on. Accountability is there to give a strong incentive both to limit errors before they happen and to fix errors when they do happen. Without accountability, there is exactly ZERO incentive to fix/improve anything, except maybe just for marketing reasons. "Look, my autonomous plane has 0.1% probability of crashing, yours has 0.2% ! Buy me!". So lack of accountability = design things to the minimum level of safety possible and put profitability before safety.
« Last Edit: November 16, 2021, 05:47:18 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Machine Learning Algorithms
« Reply #56 on: November 16, 2021, 05:54:36 pm »
The root cause for "I don't actually know how to fly at all so I crash" accidents (which are indeed very numerous - a typical example being pulling the nose up in panic when the stick shaker activates, indicating a stall) is not the addition of automation, but the vast increase in flying, and especially cheap flights. Specifically, in the early 2000s the problem was sudden and huge; airlines just needed to hire whomever they could, with no need for exceptional skills and no ambition for flying required. And no money, no time for thorough training!

Almost overnight, the "human related accidents" changed from mishaps caused by a very skilled but unquestioned hero captain, where a skilled F.O. would have been able to prevent the crash but couldn't question the captain, into a completely new genre where there are two pilots in the cockpit, neither of whom has any idea how to fly and what to do in completely normal situations.

Automation can be blamed, though, because it was the enabler for this. These crap pilots kind of learn how to fly, but without automation they would cause a much larger number of accidents - to the point of no one daring to fly; it would be just impractical. So enter automation; as it stands, these pilots only cause an accident whenever the automation decides to let the pilot handle the situation for whatever reason, or disables automated safety features (due to a sensor malfunction, for example).

Tesla Autopilot is similar. Give it to a drunk idiot and it will easily save lives by driving better, more reliably, and, more predictably than said drunk idiot. But the comparison is moot. We shouldn't let drunk idiots drive to begin with.
« Last Edit: November 16, 2021, 05:57:42 pm by Siwastaja »
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #57 on: November 16, 2021, 07:20:58 pm »
The root cause for "I don't actually know how to fly at all so I crash" accidents (which are indeed very numerous - a typical example being pulling the nose up in panic when the stick shaker activates, indicating a stall) is not the addition of automation, but the vast increase in flying, and especially cheap flights. Specifically, in the early 2000s the problem was sudden and huge; airlines just needed to hire whomever they could, with no need for exceptional skills and no ambition for flying required. And no money, no time for thorough training!

Almost overnight, the "human related accidents" changed from mishaps caused by a very skilled but unquestioned hero captain, where a skilled F.O. would have been able to prevent the crash but couldn't question the captain, into a completely new genre where there are two pilots in the cockpit, neither of whom has any idea how to fly and what to do in completely normal situations.

Automation can be blamed, though, because it was the enabler for this. These crap pilots kind of learn how to fly, but without automation they would cause a much larger number of accidents - to the point of no one daring to fly; it would be just impractical. So enter automation; as it stands, these pilots only cause an accident whenever the automation decides to let the pilot handle the situation for whatever reason, or disables automated safety features (due to a sensor malfunction, for example).

Tesla Autopilot is similar. Give it to a drunk idiot and it will easily save lives by driving better, more reliably, and, more predictably than said drunk idiot. But the comparison is moot. We shouldn't let drunk idiots drive to begin with.

But they will anyway.  So it's better to get them home without their involvement.  A car that has no facilities at all for a human to micro-manage it would be wonderful in that sense.  (and opens up a different can of worms in another)

And the bean counters will continue to skimp on whatever they can, including training.  So the equivalent of a "drunk pilot" will continue to exist as well.



In addition to a bunch of cheap flights, there's a shortage of skilled pilots to start with, because that generation is in the process of retiring now, and the new generation just isn't interested.  It's too expensive to meet the legal standard, a lot of which comes out of their own pockets *in hopes* of getting hired somewhere.  And it's not inherently exciting anymore, like it was a generation ago.  So the financially prudent ones that aren't independently wealthy, do something else that's a lot less risky.

So the shortage of skill continues, which provides the motivation to automate.  Change the law, not to make it easier to be allowed to hand-fly with commercial passengers, but to apply (at least) the same standard of reliability to an automated system.  Possibly more.  Certify the aircraft with the automation in place, as an integral part of the aircraft and as part of the certification, in a larger system that allows a swarm of them to operate with no human control whatsoever.  (a lot of that system already exists, in various forms of pilot assistance)  The entire process is designed to have that level of gate-to-gate reliability (or driveway-to-driveway?) as part of the certification itself.  The younger generations that only care about getting from A to B safely, *regardless of how it's done*, will get their wish.

"Automated aircraft" is not a new concept; the technology has existed for a couple of decades already to do it, and there's been a lot *more* serious engineering since then.  The real problem is convincing the old-generation bureaucrats who are cognitively rigid in the old *must be human!* dogma, and don't want to make themselves irrelevant, to allow it to an extent that actually *works*.
(When these regulators were still mentally plastic, we DIDN'T have machines that could do this, and so the dogma was well founded.  Not anymore.)

Because of the automation paradox, partial solutions tend to be worse than either extreme, so it's unfair to tentatively mix in just a little bit and then kill the project because the approach itself set it up to fail.  Automated cars for another example: In a system that actually realizes the practical benefits of that (bumper to bumper at Mach 0.5; entry, exit, and flat interchanges at that speed; etc.), even one human that insists on manual control is going to cause the biggest pileup in history.  Ruthlessly forbid manual control in such a system, and it all works smoothly.

(I remember reading a sci-fi "slice-of-life" story about a car salesperson, where the justification for the story was that an old internal-combustion truck that depended on a still in the owner's backyard, had just become illegal to drive to market because the highway in between became "automated only", and it was physically blocked from entering.  Newer vehicles would automatically disable the manual controls when passing that point.  The rest of the story was the process of selling a modern vehicle to this luddite while addressing their concerns.  I think that author has the right understanding.)
« Last Edit: November 16, 2021, 07:26:06 pm by AaronD »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #58 on: November 16, 2021, 07:44:24 pm »
Because of the automation paradox, partial solutions tend to be worse than either extreme,

Precisely right. The boundary and handover is a real problem.

Quote
so it's unfair to tentatively mix in just a little bit and then kill the project because the approach itself set it up to fail.

Not quite.

If a "little bit mixed in" is all that is done, then it should be killed. Either do something that can be proved to work properly, or don't do it.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Smokey

  • Super Contributor
  • ***
  • Posts: 2591
  • Country: us
  • Not An Expert
Re: Machine Learning Algorithms
« Reply #59 on: November 16, 2021, 09:17:40 pm »
Same things can be said about humans.  But this wouldn't answer the question from the OP.  There are places where ML fits and other places where it doesn't.

Key differences: the human can explain why they made a decision.

Sorta, maybe, kinda, not really.  At least not reliably......
https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #60 on: November 16, 2021, 09:41:23 pm »
Same things can be said about humans.  But this wouldn't answer the question from the OP.  There are places where ML fits and other places where it doesn't.

Key differences: the human can explain why they made a decision.

Sorta, maybe, kinda, not really.  At least not reliably......
https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow

It is regarded as poor that the "worst" humans struggle to explain their reasoning.
Even the "best" ML systems can't begin to explain their reasoning.
Spot the difference!

The whole article https://spectrum.ieee.org/ai-failures is well worth reading, since it includes many pertinent examples. However here's the bit on "explainability", with my emphasis...

Quote
Why does an AI suspect a person might be a criminal or have cancer? The explanation for this and other high-stakes predictions can have many legal, medical, and other consequences. The way in which AIs reach conclusions has long been considered a mysterious black box, leading to many attempts to devise ways to explain AIs' inner workings. "However, my recent work suggests the field of explainability is getting somewhat stuck," says Auburn's Nguyen.

Nguyen and his colleagues investigated seven different techniques that researchers have developed to attribute explanations for AI decisions—for instance, what makes an image of a matchstick a matchstick? Is it the flame or the wooden stick? They discovered that many of these methods "are quite unstable," Nguyen says. "They can give you different explanations every time."

In addition, while one attribution method might work on one set of neural networks, "it might fail completely on another set," Nguyen adds. The future of explainability may involve building databases of correct explanations, Nguyen says. Attribution methods can then go to such knowledge bases "and search for facts that might explain decisions," he says.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Machine Learning Algorithms
« Reply #61 on: November 17, 2021, 10:02:50 pm »
The CNN algorithm is perfectly capable of producing a humanly acceptable explanation. For example, in a judicial system, the algorithm can produce an explanation similar to: "here are the 10 most similar cases. In 9 of these 10 cases there was a death sentence, so I recommend the death sentence as well". Such an explanation is actually very similar to what a judge may say - the first thing the judge would look at is the rulings in similar cases. And similar to the judge, the software may be corrupted (by hackers, bugs, or whatnot), and may be made to disregard some relevant cases.
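
A rough sketch of that style of explanation (Python/NumPy, all data here is random placeholder): take the network's feature vector for the current case as an embedding, retrieve the nearest stored cases, and report their outcomes:
Code: [Select]
import numpy as np

def nearest_cases(query_vec, case_vecs, case_labels, k=10):
    # Euclidean distance from the query to every stored case embedding
    d = np.linalg.norm(case_vecs - query_vec, axis=1)
    idx = np.argsort(d)[:k]
    return idx, case_labels[idx]

# Hypothetical embeddings of 1000 past cases plus their 0/1 outcomes
rng = np.random.default_rng(0)
case_vecs   = rng.normal(size=(1000, 64))
case_labels = rng.integers(0, 2, size=1000)

idx, outcomes = nearest_cases(case_vecs[0] + 0.01, case_vecs, case_labels)
print("similar cases:  ", idx)
print("their outcomes: ", outcomes)  # "9 of 10 went this way" style summary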

CNN is not really self-learning. Neural networks are. Here the problem is that a relatively large neural network may get its own agenda and decide not to pursue the goals set by humans, or may even work against humans. This will be a real horror, although humans will probably not see it until it's too late.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #62 on: November 17, 2021, 10:15:00 pm »
The CNN algorithm is perfectly capable of producing a humanly acceptable explanation. For example, in a judicial system, the algorithm can produce an explanation similar to: "here are the 10 most similar cases. In 9 of these 10 cases there was a death sentence, so I recommend the death sentence as well". Such an explanation is actually very similar to what a judge may say - the first thing the judge would look at is the rulings in similar cases. And similar to the judge, the software may be corrupted (by hackers, bugs, or whatnot), and may be made to disregard some relevant cases.

CNN is not really self-learning. Neural networks are. Here the problem is that a relatively large neural network may get its own agenda and decide not to pursue the goals set by humans, or may even work against humans. This will be a real horror, although humans will probably not see it until it's too late.

If it isn't a neural network, then I presume it is a forward/backward chaining expert system fashionable in the 80s. Where rules are explicitly coded, yes of course it can give an explanation.

Unfortunately neural nets are descendants of Igor Aleksander's WISARD. That distinguished well between cars and tanks in the lab, but failed dismally in the field. Eventually they realised it had trained itself to distinguish between cloudy and sunny days. It is said colleagues then refused to acknowledge Aleksander's presence on sunny days :)

Yes, that kind of problem is being rediscovered by today's youngsters. Yawn.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #63 on: November 17, 2021, 10:22:25 pm »
I wonder if the best way to produce a human-equivalent explanation is to organize and train the AI as if it were human?  Use a layered approach, learning basic concepts first, none of which are the end goal, and reinforcing them into oblivion in random situations with all of the related inconveniences, then slightly more advanced but still quite simple, etc., with each step building on the capabilities of the previous one.  Then after a similar time to what it takes a human to fully mature (big disqualifier in today's instant world), the AI will make similar decisions and be able to explain them in terms of what each layer came up with.

And is that really what WE call "explanation" too?  Just a "debug spew" of what each hierarchical layer had for an answer?  If one of them is determined to be wrong, then that's training data for that layer, but still no explanation for how it got what it did from what the previous layer gave it.  I think that applies surprisingly well for *us* too.
 

Online NiHaoMike

  • Super Contributor
  • ***
  • Posts: 9018
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: Machine Learning Algorithms
« Reply #64 on: November 18, 2021, 12:47:15 am »
Quick overview of three types of machine learning:
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #65 on: November 18, 2021, 01:00:03 am »
I wonder if the best way to produce a human-equivalent explanation is to organize and train the AI as if it were human?  Use a layered approach, learning basic concepts first, none of which are the end goal, and reinforcing them into oblivion in random situations with all of the related inconveniences, then slightly more advanced but still quite simple, etc., with each step building on the capabilities of the previous one.

Congratulations.

You've just reinvented the approach used in expert systems in the 80s :) There are even languages for those techniques. Search terms: forward chaining, backward chaining, Horn clauses.
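
For the curious, here's a toy forward-chaining engine (Python, with rules and facts invented for illustration). Note how the list of fired rules doubles as an explanation - exactly what NNs can't give you:
Code: [Select]
# Rules are explicit (condition set => conclusion), so the "explanation"
# is simply the trace of rules that fired.
rules = [
    ({"has_feathers", "lays_eggs"}, "bird"),
    ({"bird", "cannot_fly"}, "flightless_bird"),
]

def forward_chain(facts):
    facts = set(facts)
    trace = []  # human-readable explanation, built as we go
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(conditions)} => {conclusion}")
                changed = True
    return facts, trace

facts, trace = forward_chain({"has_feathers", "lays_eggs", "cannot_fly"})
print(trace)  # each fired rule is a step in the explanation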

(BTW, welcome to the Triumphant Re-inventors club  :) We all do that from time to time; I did it with FSMs and microcoding).
« Last Edit: November 18, 2021, 01:10:43 am by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #66 on: November 18, 2021, 01:48:12 am »
Yep. The techniques and knowledge we have about them haven't changed all that much actually. What has is technology - the computational power we have at our disposal, which now makes some approaches, that were once unpractical, usable.

There definitely are hybrid approaches too - that unfortunately mostly stay in academic circles, probably because they are not hyped enough. One common hybrid approach is to have a good old rule-based system coupled to a NN, either to determine the rules themselves, or to adjust/improve them as the system is being used. I rather like this approach. The rules themselves are then perfectly understandable. They can be fully automatically derived from training data as well, but it's also possible to verify them and hand-modify the ones that appear to be bogus.

The hype about current AI (which is definitely not what all AI is about either) reminds me a bit of the hype there was around fuzzy logic a few decades ago. Manufacturers started shoving fuzzy logic everywhere, even when a PID would have worked at least as well. The hype passed. And I find this kind of "debacle" (maybe too strong a word though) a shame: fuzzy logic has some interesting things to it actually, way beyond how it was used back then in industry - I suggest reading the literature about it, starting with Zadeh's papers of course. You may find concepts and ideas that are a lot more interesting than what has been said about it (at least ever since it went out of fashion.)
« Last Edit: November 18, 2021, 01:50:21 am by SiliconWizard »
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #67 on: November 18, 2021, 02:45:09 am »
I wonder if the best way to produce a human-equivalent explanation is to organize and train the AI as if it were human?  Use a layered approach, learning basic concepts first, none of which are the end goal, and reinforcing them into oblivion in random situations with all of the related inconveniences, then slightly more advanced but still quite simple, etc., with each step building on the capabilities of the previous one.

Congratulations.

You've just reinvented the approach used in expert systems in the 80s :) There are even languages for those techniques. Search terms: forward chaining, backward chaining, Horn clauses.

(BTW, welcome to the Triumphant Re-inventors club  :) We all do that from time to time; I did it with FSMs and microcoding).

Ha!  Okay.  I hadn't seen that, but I wouldn't have thought to look there either.

I did it with IEEE754 floating point too.  I needed to compress a wide range of integers into a single byte and then decompress it on an 8-bit microcontroller, and my initial thought was that the IEEE version was too complicated.  But by the time I had solved all the problems with my version, it was pretty much *exactly* IEEE754, just with fewer bits.  So now I know why *that* is the way it is.  :)
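
Something like this, sketched in Python for readability (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits - the actual bit allocation in my version may have differed, and zero/rounding corner cases are glossed over here):
Code: [Select]
def encode8(value):
    # 1 sign bit | 4 exponent bits (bias 7) | 3 mantissa bits (hidden 1)
    sign = 0x80 if value < 0 else 0x00
    v = abs(float(value))
    if v == 0.0:
        return sign                  # zero is special-cased
    exp = 0
    while v >= 2.0:
        v /= 2.0
        exp += 1
    while v < 1.0:
        v *= 2.0
        exp -= 1
    mant = int(round((v - 1.0) * 8)) & 0x07  # 3 bits after the hidden 1
    return sign | (((exp + 7) & 0x0F) << 3) | mant

def decode8(byte):
    if byte & 0x7F == 0:
        return 0.0
    sign = -1.0 if byte & 0x80 else 1.0
    exp = ((byte >> 3) & 0x0F) - 7
    mant = 1.0 + (byte & 0x07) / 8.0
    return sign * mant * 2.0 ** exp

print(decode8(encode8(200)))  # ~192: coarse steps, but a wide range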
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #68 on: November 18, 2021, 09:27:45 am »
Yep. The techniques and knowledge we have about them haven't changed all that much actually. What has is technology - the computational power we have at our disposal, which now makes some approaches, that were once unpractical, usable.

There definitely are hybrid approaches too - that unfortunately mostly stay in academic circles, probably because they are not hyped enough. One common hybrid approach is to have a good old rule-based system coupled to a NN, either to determine the rules themselves, or to adjust/improve them as the system is being used. I rather like this approach. The rules themselves are then perfectly understandable. They can be fully automatically derived from training data as well, but it's also possible to verify them and hand-modify the ones that appear to be bogus.

The hype about current AI (which is definitely not what all AI is about either) reminds me a bit of the hype there was around fuzzy logic a few decades ago. Manufacturers started shoving fuzzy logic everywhere, even when a PID would have worked at least as well. The hype passed. And I find this kind of "debacle" (maybe too strong a word though) a shame: fuzzy logic has some interesting things to it actually, way beyond how it was used back then in industry - I suggest reading the literature about it, starting with Zadeh's papers of course. You may find concepts and ideas that are a lot more interesting than what has been said about it (at least ever since it went out of fashion.)

Yup!

The hybrid approach does use the standard engineering technique: decomposition into small independent sections that are testable in isolation. The ML mob ignores that concept in favour of magic.

It has to be said that some problems aren't amenable to that, e.g. automated translation, since they do require global context to avoid the "out of sight out of mind -> invisible idiot" problem.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #69 on: November 18, 2021, 05:08:26 pm »
There may be a bunch of "followers" favoring magic, but this point itself is nothing new. When something becomes the de facto approach, whatever the reason, people will tend to flock to it. Actually, those who don't might even be considered idiots. (Yes, does that ring a bell? ;D ) That's how trends have always worked.

It's interesting to consider how it subtly drifts away from standard engineering techniques, as you said. This trend is IMO not restricted to AI/ML, but that would be a whole topic in itself. I think it goes hand in hand with what I mentioned earlier, an apparent will to get rid of the concept of accountability. Here as well, it's absolutely not restricted to AI. It seems to be a deep change in society that's happening. Tell me I'm wrong though!

But regarding ML, I think there's more to it than that. What does ML currently feed off? Huge amounts of data. That's ML's fuel. And data has become the XXIst century's goldmine. Is that a wonder ML is pushed at all costs by giant tech companies? So now, as we can even read in this thread, people think we can solve all problems with more data.

We know though that large amounts of data, improperly used, can lead to absolutely any conclusion and its opposite. Yes, even the same data. Classic fun: https://tylervigen.com/spurious-correlations
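
It takes only a few lines to demonstrate (Python/NumPy; the labels are of course invented): two completely independent random walks will routinely show a sizeable correlation:
Code: [Select]
import numpy as np

rng = np.random.default_rng(42)
a = np.cumsum(rng.normal(size=200))  # e.g. "ice cream sales"
b = np.cumsum(rng.normal(size=200))  # e.g. "shark attacks" - unrelated

r = np.corrcoef(a, b)[0, 1]
print(f"correlation of two unrelated random walks: {r:+.2f}")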
« Last Edit: November 18, 2021, 05:12:24 pm by SiliconWizard »
 

Offline SuzyC

  • Frequent Contributor
  • **
  • Posts: 792
Re: Machine Learning Algorithms
« Reply #70 on: November 18, 2021, 05:22:56 pm »
Suppose I wanted the lowest-cost, smallest hardware that could solve a "simple" problem, like a device that could perform satisfactorily at recognizing a few command words.

I have also seen working examples of fuzzy logic used with an 8-bit microcontroller that successfully learns to balance a double-pendulum.

Which quickly brings to mind two questions:

 How much ML or NN hardware would be required to do the same two example tasks?

 Why is fuzzy logic no longer in fashion to create intelligent devices?
« Last Edit: November 18, 2021, 05:31:02 pm by SuzyC »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #71 on: November 18, 2021, 05:59:41 pm »
Fuzzy logic is not dead. There still are numerous books and papers about it.
Just as examples in areas where CNNs are used:
https://link.springer.com/book/10.1007/978-1-4614-6666-6
https://ieeexplore.ieee.org/document/8812593

I also suggest reading this: https://www.sfu.ca/~vdabbagh/Zadeh_08.pdf

It went out of fashion probably because it was overhyped in the 90s and early 2000s, and got replaced with another hype. Something notable too is that it has been consistently misunderstood and misused. As I mentioned, the common examples of fuzzy logic back in the day were often regulation systems - which was all cute, because you could suddenly use a set of understandable rules to solve a given problem instead of resorting to a formula with derivatives and integrals, but it did not necessarily provide a lot of benefits compared to just using PIDs.
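
For reference, here's roughly what such a fuzzy regulation rule set looks like - a minimal Python sketch with triangular memberships and centroid defuzzification; all thresholds are invented and inputs are assumed to stay within range:
Code: [Select]
import numpy as np

def tri(x, a, b, c):
    # Triangular membership function peaking at b
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_heater(error):  # error = setpoint - temperature, assumed in [-4, 4]
    cold = tri(error, 0.0, 2.0, 4.0)    # well below setpoint
    ok   = tri(error, -2.0, 0.0, 2.0)   # about right
    hot  = tri(error, -4.0, -2.0, 0.0)  # above setpoint
    # Each rule votes for an output level, weighted by its membership
    levels = np.array([1.0, 0.5, 0.0])  # heat / hold / off
    w = np.array([cold, ok, hot])
    return float((w * levels).sum() / max(w.sum(), 1e-9))

print(fuzzy_heater(3.0))  # far below setpoint -> full heat
print(fuzzy_heater(0.0))  # at setpoint -> hold (0.5)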

As to comparing the resources needed for a given task using various approaches, that's an interesting question. There may be papers about that, although a fair comparison may not be easy. You probably need to dig that up.
« Last Edit: November 18, 2021, 06:01:47 pm by SiliconWizard »
 
The following users thanked this post: SuzyC

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #72 on: November 18, 2021, 06:10:37 pm »
Suppose I wanted the lowest-cost, smallest hardware that could solve a "simple" problem, like a device that could perform satisfactorily at recognizing a few command words.

How robust do you want it?  I've seen some research, for example, to try and see how well a dog actually understands human language.  The primary result was that they essentially hear the sound of the first syllable and just count syllables after that.  If you're satisfied with that, then maybe record the first peak, and do an FFT on the recording while you count subsequent peaks.  By the time the entire command is done, the FFT might also be done, and you can compare it to a lookup table of statistics for each command, plus the exact count.  The learning part is to build that lookup table of statistics.
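
Something along these lines (a Python/NumPy sketch of the idea; all thresholds and the command table are invented, and an MCU version would be fixed-point, but the structure is the same):
Code: [Select]
import numpy as np

def count_syllables(envelope, thresh=0.3):
    # A "syllable" = a rising edge in the thresholded loudness envelope
    above = envelope > thresh * envelope.max()
    return int(np.sum(~above[:-1] & above[1:]) + above[0])

def fingerprint(audio, nbins=16):
    spec = np.abs(np.fft.rfft(audio))
    chunks = np.array_split(spec, nbins)  # coarse spectral shape
    fp = np.array([c.mean() for c in chunks])
    return fp / (fp.sum() + 1e-9)

def recognise(audio, envelope, table):
    # table = list of {"name", "syllables", "fp"} built during "training"
    syl = count_syllables(envelope)
    fp = fingerprint(audio)
    candidates = [c for c in table if c["syllables"] == syl]
    if not candidates:
        return "unknown"
    # Matching syllable count, nearest stored spectrum wins
    return min(candidates, key=lambda c: np.linalg.norm(c["fp"] - fp))["name"]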

I have also seen working examples of fuzzy logic used with an 8-bit microcontroller that successfully learns to balance a double-pendulum.

The way I'd do that is to fix the system to a mathematical model with a few unknowns, so that the learning part is only to fill in those unknowns.  The system is limited to that task, but it's much easier to make than it would be for a general purpose thing that then learns this.

Why is fuzzy logic no longer in fashion to create intelligent devices?

That, I don't know.  My only exposure to something that was called "Fuzzy Logic" was in industrial controls.  The vendor's new firmware introduced another "black box" module that they called "Fuzzy Logic".  Essentially, you would configure it for N input variables and M output variables, and then enter an N-dimensional lookup table of M output values at each position, based on a small handful of inputs that are easy to characterize.  (hot/cold, empty/low/high/full, etc.)  It then did a linear interpolation of that lookup table based on the actual input values from the physical process.

No actual learning at all in that system.  You gave it some strategic answers, and it drew a bunch of straight lines in between.  If a straight line didn't work, you'd add another data point with a predetermined answer.
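
For illustration, that whole module boils down to multilinear interpolation over a regular grid, which scipy ships directly - the variables and grid values here are invented, not the vendor's:

Code: [Select]
# The "black box" described above is essentially multilinear interpolation
# over an N-dimensional lookup table; scipy provides exactly that.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Two easy-to-characterize inputs: temperature and tank level.
temperature = np.array([0.0, 50.0, 100.0])    # cold / warm / hot
tank_level = np.array([0.0, 0.3, 0.7, 1.0])   # empty / low / high / full

# One output (stir speed), entered by hand at each grid point: a 3x4 table.
stir_speed = np.array([
    [0.0, 0.0, 0.2, 0.2],
    [0.0, 0.5, 0.8, 0.8],
    [0.2, 0.8, 1.0, 1.0],
])

table = RegularGridInterpolator((temperature, tank_level), stir_speed)

# Real process inputs land between grid points; the output is the
# straight-line blend of the neighbouring entries.
print(table([[62.0, 0.55]]))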

I remember thinking at the time that that's not real FL.  It felt more like a marketing buzzword to make an electrician-turned-programmer feel fancy.  It has some similarities, but it's not the real thing.
 
The following users thanked this post: SuzyC

Offline SuzyC

  • Frequent Contributor
  • **
  • Posts: 792
Re: Machine Learning Algorithms
« Reply #73 on: November 18, 2021, 06:14:05 pm »
SiliconWizard, thanks for those links to get to better understand my questions about fuzzy logic!

But the second question remains unanswered: what would be the minimal hardware required to obtain the same results?


What are the minimal components required to implement a NN or ML system to solve the example problems?

Is it that NN and ML are only ever used by themselves in the realm of AI applications, and that a NN/ML system is, by fashion, not allowed to be integrated with fuzzy logic to solve problems?
« Last Edit: November 18, 2021, 06:25:42 pm by SuzyC »
 

Offline SuzyC

  • Frequent Contributor
  • **
  • Posts: 792
Re: Machine Learning Algorithms
« Reply #74 on: November 18, 2021, 06:21:16 pm »
Thanks AaronD,

From your posting I get the idea that judicial sentencing and medical diagnostic work could also be done using fuzzy logic... if it were in style.

Referring to the example you posted, what part of "real" fuzzy logic was neglected and not used in control implementations?
« Last Edit: November 18, 2021, 06:24:48 pm by SuzyC »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #75 on: November 18, 2021, 06:24:36 pm »
From your posting I get the idea that judicial sentencing and medical diagnostic work could also be done using fuzzy logic..if it was in style.

https://www.researchgate.net/publication/308823268_Medical_diagnosis_system_using_fuzzy_logic_toolbox
https://pubmed.ncbi.nlm.nih.gov/29852957/
 
The following users thanked this post: SuzyC

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #76 on: November 18, 2021, 06:41:31 pm »
What would be the minimal hardware required to obtain the same results?

What are the minimal components required to implement a NN or ML system to solve the example problems?

Minimum hardware depends not on the algorithm, but on the amount of data that you need to push through it.  If you need an answer in a handful of milliseconds to whether a given 4K image contains a stop sign, then you need something pretty beefy.  (or do this: https://xkcd.com/1897/)  If you're okay to wait a few seconds to select from 10 available commands based on about 3 seconds of 8-bit audio at 8kHz sample rate, then you could probably get away with an 8-bit microcontroller (with 3s * 8kHz samples * 1B/sample = 24kB of RAM, plus processing headroom, which eliminates most of them but not all), using only that one chip to capture the analog signal and then do all the processing on it.

Referring to the example you  posted, what part of "real" fuzzy logic was neglected, not used in control implementations?

It was a multi-dimensional lookup table.  Nothing more.  Fuzzy Logic applies boolean logic to non-boolean inputs, using a modified form of boolean expressions like AND, OR, NOT, etc., so that the human-designed rules still make sense in the boolean way of thinking.  For example, "IF [[tank_level IS midrange] AND [temperature IS warm]] THEN stir".  But that's not what this was.  The thought process for the version that I saw was entirely analog, except for the sparse sampling, and potentially a threshold comparison at the end to control an on/off device; it only happened to use a digital system to process it.
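
For illustration, a rule like that with the classic operators (AND = min, OR = max, NOT = 1 - x) might look like this - the membership functions and breakpoints are made up:

Code: [Select]
# Toy version of "IF tank_level IS midrange AND temperature IS warm THEN
# stir", using classic fuzzy operators. All breakpoints are invented.

def triangle(x, lo, peak, hi):
    # Triangular membership: 0 outside [lo, hi], rising to 1 at the peak.
    if x <= lo or x >= hi:
        return 0.0
    return (x - lo) / (peak - lo) if x < peak else (hi - x) / (hi - peak)

def rule_stir(tank_level, temperature):
    is_midrange = triangle(tank_level, 0.2, 0.5, 0.8)
    is_warm = triangle(temperature, 30.0, 50.0, 70.0)
    # Fuzzy AND: the rule fires to the degree BOTH conditions hold.
    return min(is_midrange, is_warm)

# Degree of firing in [0, 1]; defuzzify (threshold, centroid, ...) to act.
print(rule_stir(0.55, 48.0))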
« Last Edit: November 18, 2021, 06:47:13 pm by AaronD »
 
The following users thanked this post: SuzyC

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #77 on: November 18, 2021, 10:46:11 pm »
Also lack of context understanding.

I like to refer to an example where a supposedly state-of-the-art image recognition "AI" recognizes objects on a street, draws the bounding box in real time and puts the label next to it. The typical demo.

But then, the rectangular thing attached to the outside of a house at the end of the driveway, which we humans call a "garage door", is misidentified as "flat screen TV", and indeed, if you just cropped that part of the image, a human could make the same mistake - it's just a rectangle with little or no detail. What makes it a garage door is the context around it. You don't buy a 300-inch flat screen TV, and you don't mount it outside your house, at ground level, at the end of your driveway. This is all obvious to a human.

Context is tricky.

https://twitter.com/FSD_in_6m/status/1400207129479352323

 
The following users thanked this post: Siwastaja

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Machine Learning Algorithms
« Reply #78 on: November 19, 2021, 08:52:01 am »
And THAT one is the example of a situation where you can't get the required 10000 pieces of training data to find the right correlations. This is a less-than-once-in-a-lifetime occurrence. Even if you "learn in the cloud", i.e. combine all the data worldwide to be able to include those few dozen cases where a truck full of traffic lights drives in front of you, you are not going to get the NN to learn the correct way of reacting to this situation.

For a human, this is obvious. Because human learning is not based on making simple correlation coefficients. It has to work another way, because a single human can't access vast amounts of data for learning. A human kid sees a picture of a badly drawn cat in a book and then recognizes the cat, drawn or real, in very different scenarios. A NN requires a 1000-page book full of pictures: "this is a cat. This is also a cat. This isn't a cat. This isn't either, but this is."

The learning mechanisms are clearly pretty bad. Huge amounts of data can be used to compensate, but that only works when huge amounts of data are available. This misses all the corner cases, by the very definition of a corner case!

This is also why NNs are great for classification tasks where making mistakes in corner cases doesn't matter - for example, the waste-recycling detection mentioned earlier, because waste-stream processes are robust against small amounts of the wrong types.
« Last Edit: November 19, 2021, 08:54:07 am by Siwastaja »
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7388
  • Country: nl
  • Current job: ATEX product design
Re: Machine Learning Algorithms
« Reply #79 on: November 19, 2021, 10:28:15 am »
To answer the original question: PyTorch, scipy, opencv. PyTorch for custom ML stuff; anything written more than 1-2 years ago is obsolete. ML as a whole became so much easier with it, and the coding part is very straightforward. I had an ML algorithm working and trained in about a day last time I tried, while with Keras, Tensorflow and others it was a major PITA to get going.
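
For flavour, a complete toy training loop is about this much code - the tiny model and the random stand-in data are placeholders, not a recipe for any particular task:

Code: [Select]
# Minimal PyTorch classifier and training loop.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 64)            # stand-in inputs
y = torch.randint(0, 10, (256,))    # stand-in labels

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(x), y)     # forward pass
    loss.backward()                 # autograd does the rest
    opt.step()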
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Machine Learning Algorithms
« Reply #80 on: November 19, 2021, 04:15:32 pm »
A human kid sees a picture of a badly drawn cat in a book and then recognizes the cat, drawn or real, in very different scenarios.

Or a kid draws a boa and everyone else recognizes it as a hat.
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #81 on: November 19, 2021, 04:33:28 pm »
And THAT one is the example of a situation where you can't get the required 10000 pieces of training data to find the right correlations. This is a less-than-once-in-a-lifetime occurrence. Even if you "learn in the cloud", i.e. combine all the data worldwide to be able to include those few dozen cases where a truck full of traffic lights drives in front of you, you are not going to get the NN to learn the correct way of reacting to this situation.

For a human, this is obvious. Because human learning is not based on making simple correlation coefficients. It has to work another way, because a single human can't access vast amounts of data for learning. A human kid sees a picture of a badly drawn cat in a book and then recognizes the cat, drawn or real, in very different scenarios. A NN requires a 1000-page book full of pictures: "this is a cat. This is also a cat. This isn't a cat. This isn't either, but this is."

The learning mechanisms are clearly pretty bad. Huge amounts of data can be used to compensate, but that only works when huge amounts of data are available. This misses all the corner cases, by the very definition of a corner case!

This is also why NNs are great for classification tasks where making mistakes in corner cases doesn't matter - for example, the waste-recycling detection mentioned earlier, because waste-stream processes are robust against small amounts of the wrong types.

I think the talk about fads in ML is more significant than meets the eye.  The entire field is so new, promising, and exciting that we can't slow down and do it right.  Old techniques that may have failed because of the processing power available at the time are not revisited, nor do we spend the time to actually train anything well.  (remember how long humans take, starting from blank at birth...)  The amazingly long training times and amounts of data required clash with our excitement, and so we move on to something else.

It simply takes time, with lots of relevant experiences, to build what we call "common sense".  Humans that don't have those experiences don't have the sense either; and a correctly built learning machine that does, can.  But like I said in the previous paragraph, there's no substitute for the long way around, and we're not going to take it as long as the funding is based on excitement.



In the case of traffic lights on a truck, you might think of adding a bunch of rules like, "Traffic lights are only valid when at least one of them is lit up, and when they're not moving a significant distance, but swinging in the breeze while on is still valid, etc.", but you very quickly get into an unworkable mountain of arbitrary rules that is practically impossible for even humans to learn.  So why do we expect a machine to do it?

(I'm related to someone that does that.  He says he doesn't understand people, so he follows rules instead, built and refined over 50+ years of essentially trial-and-error.  He still makes frequent similar mistakes like a machine does, complete with high confidence in a terrible answer.  He can do things on his own just fine, even got a Ph.D. in a highly technical field and made his career there, but personal interactions are still painful, and he doesn't seem to have a true "engineering mind" that works to at least some degree across all disciplines.  It's very much based on direct experience alone.  So if even a human can't make it all work with direct rules alone, there must be something else that a machine must also include in order to get it right.)

I think the case of traffic lights on a truck, and many others, would greatly benefit from a "sense of intent", or, "What is the intended purpose of this scenario?"  Is this traffic light intended to control traffic?  Or is it simply being transported?  How would it actually work to have a valid traffic light on a moving truck, and do the required rules for that make sense?  (automatic proof by contradiction)  Lots of common sense is involved in that, which comes with the requirements above, but common sense is not the whole story either.

Build a machine that has a section for others' intent, include that as part of the learning, and use it to influence decisions; and see what it comes up with...
 
The following users thanked this post: Marco, SiliconWizard

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Machine Learning Algorithms
« Reply #82 on: November 19, 2021, 08:11:17 pm »
In the case of traffic lights on a truck, you might think of adding a bunch of rules like ...

I don't think imposing rules on a machine gives it any intelligence. Rather you use your own intelligence to design the set of rules which are then followed by the machine.

For example, I have a chip programming machine. It has a camera and needs to detect if the chip is present, and if the chip is there it needs to detect its position. I looked at various pictures taken with the camera. Then I designed a small set of rules. Then I wrote the program to calculate the rules. The program does this very quickly and never makes any mistakes. It is silly to believe that it has any intelligence. The intelligence is all mine.
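
A sketch of that kind of fixed-rule vision, using plain OpenCV - the constants and file name here are illustrative, not the actual rules:

Code: [Select]
# Fixed-rule presence/position check: threshold, take the biggest blob,
# apply hand-chosen acceptance rules. All constants are made up.
import cv2

def find_chip(gray, min_area=500.0):
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                       # rule: no blob -> no chip
    blob = max(contours, key=cv2.contourArea)
    if cv2.contourArea(blob) < min_area:
        return None                       # rule: blob too small -> no chip
    (cx, cy), (w, h), angle = cv2.minAreaRect(blob)
    return cx, cy, angle                  # chip position and rotation

frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
print(find_chip(frame))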
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #83 on: November 19, 2021, 08:41:49 pm »
In the case of traffic lights on a truck, you might think of adding a bunch of rules like ...

I don't think imposing rules on a machine gives it any intelligence. Rather you use your own intelligence to design the set of rules which are then followed by the machine.

For example, I have a chip programming machine. It has a camera and needs to detect if the chip is present, and if the chip is there it needs to detect its position. I looked at various pictures taken with the camera. Then I designed a small set of rules. Then I wrote the program to calculate the rules. The program does this very quickly and never makes any mistakes. It is silly to believe that it has any intelligence. The intelligence is all mine.

What IS "intelligence" anyway?  That's a surprisingly hard question to answer, without introducing a bunch of unnecessary restrictions by definition.

And the part that you quoted was a strawman argument, used to make the point that followed it.  :)
 

Offline cdev

  • Super Contributor
  • ***
  • !
  • Posts: 7350
  • Country: 00
Re: Machine Learning Algorithms
« Reply #84 on: November 19, 2021, 09:08:31 pm »
Governments want this so badly (it has the potential to save business so much money) that they will give companies immunity from liability when they screw up, just to give them an edge in using it sooner. After all, they are where the money comes from, right?

And one problem, as I said and tggzzz pointed out, is that it's impossible to get a formal proof that a given trained NN will behave in the way we expect it to. We can only test, test, test until we get a statistically significant result that meets our requirements, and it's never 100%. Thing is, what happens in the few % of cases in which it fails is unknown (and can be a big risk in any critical application), and *why* it performs as expected is actually also unknown.

Our inability to prove correctness of trained NNs is a major issue, that bites, and will bite us for years to come. Worse yet, analyzing why a trained NN fails for some inputs is also almost impossible. Thus using them in any safety-critical application is a serious problem.

It is even worse than that :( You have no idea how close you are to the boundary where they stop working. There are many examples of trivial (even invisible) changes to pictures making the classifier completely misclassify the image.
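
A concrete illustration of those "invisible changes" is the fast gradient sign method: nudge every pixel a tiny amount in whichever direction most increases the loss. A minimal PyTorch sketch, where the model, image and label are placeholders:

Code: [Select]
# FGSM: a small, often imperceptible perturbation that flips a
# classifier's answer. 'model', 'image' and 'label' are placeholders.
import torch
from torch import nn

def fgsm(model, image, label, eps=0.01):
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- eps in the direction that increases the loss.
    return (image + eps * image.grad.sign()).detach().clamp(0.0, 1.0)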

Yes, exactly. Consider that companies also love to have an escape from blame that basically is always available to them, which is what is coming.
"What the large print giveth, the small print taketh away."
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Machine Learning Algorithms
« Reply #85 on: November 20, 2021, 12:48:55 am »
What IS "intelligence" anyway?

"the ability to acquire and apply knowledge and skills" the dictionary says.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Machine Learning Algorithms
« Reply #86 on: November 20, 2021, 04:59:42 pm »
But if the machine replicates NorthGuy's intelligence, then you could say the machine itself is intelligent?

And, if NorthGuy did the job well, then what's the problem?

In this regard, I believe well-programmed fixed algorithms, or "expert systems", are a much better idea than forcing general-purpose NNs everywhere and hoping that throwing petabytes of data at them somehow automagically solves all the problems.

Also, I believe in giving the machine super-human capabilities when you can, instead of trying to replicate a human, weaknesses included. For example, in self-driving automobiles you can measure distance using laser beams, an obvious advantage over human vision. Yet Tesla says they don't want that when they can route standard human-like camera vision into a neural network with the complexity of an ant's brain and hope it makes some sense.
« Last Edit: November 20, 2021, 05:02:52 pm by Siwastaja »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #87 on: November 20, 2021, 05:45:32 pm »
Yes, exactly. Consider that companies also love to have an escape from blame that basically is always available to them, which is what is coming.

Yep, this is one point I also mentioned and that I think is key here. This artificial "intelligence" has the extraordinary "power" of helping companies (and ultimately, governments) get rid of any kind of liability. How could they not force the movement at all costs?

The autonomous car example is telling. If the autonomous system fails and causes an accident, the driver will be liable! Because the machine itself can't, of course, be liable for anything, and since its behaviour is not provable, the company selling it can't be either (which is utterly twisted, of course.) The really fun part is that proponents of this will claim how much more reliable AI is compared to humans on the road, yet if anything goes wrong, the driver is supposed to be the one supervising the machine at all times and will be liable. And of course all this is perfectly consistent. :-DD
 

Online NiHaoMike

  • Super Contributor
  • ***
  • Posts: 9018
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: Machine Learning Algorithms
« Reply #88 on: November 21, 2021, 03:40:46 pm »
The autonomous car example is telling. If the autonomous system fails and causes an accident, the driver will be liable! Because the machine itself can't, of course, be liable for anything, and since its behaviour is not provable, the company selling it can't be either (which is utterly twisted, of course.) The really fun part is that proponents of this will claim how much more reliable AI is compared to humans on the road, yet if anything goes wrong, the driver is supposed to be the one supervising the machine at all times and will be liable. And of course all this is perfectly consistent. :-DD
Isn't it that if an aircraft crashes because the autopilot malfunctioned, the pilots are at fault for not noticing and taking action? (In one case, the autopilot disengaged for some reason but the warning buzzer wasn't loud enough to stand out from the background noise, while the pilots were troubleshooting some other problem with the aircraft.)
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #89 on: November 21, 2021, 04:36:40 pm »
The autonomous car example is telling. If the autonomous system fails and causes an accident, the driver will be liable! Because the machine itself can't, of course, be liable for anything, and since its behaviour is not provable, the company selling it can't be either (which is utterly twisted, of course.) The really fun part is that proponents of this will claim how much more reliable AI is compared to humans on the road, yet if anything goes wrong, the driver is supposed to be the one supervising the machine at all times and will be liable. And of course all this is perfectly consistent. :-DD
Isn't it that if an aircraft crashes because the autopilot malfunctioned, the pilots are at fault for not noticing and taking action? (In one case, the autopilot disengaged for some reason but the warning buzzer wasn't loud enough to stand out from the background noise, while the pilots were troubleshooting some other problem with the aircraft.)

In theory the pilots always have authority and responsibility in law.

In practice it is common for the entire flight deck crew to be asleep on long haul flights. They might not notice the autopilot has made a "poor" decision.
In practice sometimes the autopilot overrides the pilots: witness the 737 MAX accidents and AF447, where neither pilot realised P2's control inputs were being ignored.

Now those are highly trained people operating in a relatively well understood and constrained environment. Many of the new ML systems will have untrained operators in complex environments. What could possibly go wrong?
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #90 on: November 21, 2021, 06:18:53 pm »
The autonomous car example is telling. If the autonomous system fails and causes an accident, the driver will be liable! Because the machine itself can't, of course, be liable for anything, and since its behaviour is not provable, the company selling it can't be either (which is utterly twisted, of course.) The really fun part is that proponents of this will claim how much more reliable AI is compared to humans on the road, yet if anything goes wrong, the driver is supposed to be the one supervising the machine at all times and will be liable. And of course all this is perfectly consistent. :-DD
Isn't it that if an aircraft crashes because the autopilot malfunctioned, the pilots are at fault for not noticing and taking action? (In one case, the autopilot disengaged for some reason but the warning buzzer wasn't loud enough to stand out from the background noise, while the pilots were troubleshooting some other problem with the aircraft.)

Looks like you missed the point - at least you haven't given this a lot of thought.

A few things:
- I pointed out the patent inconsistency of CLAIMING that AI systems are much safer than any human could be, while ultimately expecting the human to make up for any mishap of the automated system. That is just twisted.
- I would have a lot fewer concerns overall if companies promoting and selling stuff with AI systems were ENTIRELY liable in case of a mishap. That'd be a game changer for sure.
- Pilots in aircraft are not a very good parallel - ultimately, the "pilot in command" is responsible for anything that happens in the aircraft, not just any pilot (copilots are not). This has strict legal implications and is quite different from the case of an individual driver in a car.
- Conventional autopilots are predictable (at least for the most part ;D ). Sometimes things can go wrong, due for instance to sensor failure not well handled in software, but most often, when a sensor fails, the autopilot will disengage itself first thing. The exceptions mentioned by tggzzz are actually not "autopilot" failures per se, but extra flight systems that are supposed to keep the plane safe. Not that it fundamentally makes a big difference, just that those systems are "sneakier" than autopilots which can just be disabled upon the press of a button. Possibly a parallel in a car would be, for instance, ABS failure, rather than a failure of those AI-based "autopilots".
- Even so, there already are cases with existing systems which are not AI-based (like the MCAS debacle). But as a few of us are trying to explain in this thread, the difference is that it was, in the end, relatively straightforward to understand where the problem came from, what happened and how to fix it, because the systems in question were analyzable. And Boeing got the consequences. Imagine the same issue with Boeing's MCAS, but with the MCAS entirely AI-based and no one able to pinpoint the issue for sure after the accidents.

"Interestingly", Elon Musk is perfectly aware of those issues with AI and has been saying things about it that are quite similar to what tggzzz, I, and a few others are saying here. His main point for actively *using* AI in his products is to become proactive rather than being passive and letting others do it anyway. He's been a proponent of *regulating* AI in a strict way. Problem though, nothing much is really happening yet in that area, and he's still actively promoting AI, while - at least as far as I know - having not done much for the regulation part (like actively working with politics) apart from a few talks. I get his point of being proactive rather than letting others do it anyway, but as it is, whatever his concerns are, it's not helping much and doesn't look liike much more than just cute marketing talk to make him look like the "good guy".
« Last Edit: November 21, 2021, 06:20:40 pm by SiliconWizard »
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #91 on: November 21, 2021, 11:52:32 pm »
In practice it is common for the entire flight deck crew to be asleep on long haul flights.

That *has* happened but it should never happen. It's certainly not COMMON.

Flights long enough for this to be any sort of problem have multiple crews onboard and they go to actual bunks to sleep.
 
The following users thanked this post: NiHaoMike

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #92 on: November 21, 2021, 11:58:35 pm »
In practice it is common for the entire flight deck crew to be asleep on long haul flights.

That *has* happened but it should never happen. It's certainly not COMMON.

Flights long enough for this to be any sort of problem have multiple crews onboard and they go to actual bunks to sleep.

Absolutely.
In the AF447 case, the PIC was asleep when things started to get problematic, but there was a copilot in his seat. Funnily enough, in this particular case, had the copilot been asleep instead, the crash would probably never have happened. But that's just one particular case!
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Machine Learning Algorithms
« Reply #93 on: November 22, 2021, 06:34:14 pm »
Absolutely.
In the AF447 case, the PIC was asleep when things started to get problematic, but there was a copilot in his seat. Funnily enough, in this particular case, had the copilot been asleep instead, the crash would probably never have happened. But that's just one particular case!

IMHO, #1 root cause in that one as well is still lack of basic skills and training of those basic skills. It's again the classic "oh, we are falling from the sky, I have no idea what to do, maybe pull the nose up so we go higher?!?" Yes, everything else, like sensor failures and fatigue, is a contributing factor that adds to the confusion, but a solid understanding of the basics is the key here. It is like forgetting Ohm's law: spending a minute trying to figure out whether increasing a resistor value increases or decreases the current, and simply not being able to - yet knowing how to configure a project in CubeMX, the equivalent of dealing with all those flight deck computers to get the plane airborne and to the destination, without any idea of what's actually happening.

Solution? Just as doing enough bare-metal PIC/AVR projects is a good starting point, learning to fly a small aircraft with no autopilot whatsoever would be a good way to get enough grasp of basics such as what a stall means and how to recover from it. Seemingly this isn't obvious at all to commercial airline pilots. It should be.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #94 on: November 22, 2021, 07:06:07 pm »
Yes, and one "output" of this accident is that they drastically improved the training regarding handling stalls on airliners.

Thing is, related to this whole discussion: the more AI we use, the less trained people will be. Training has a cost. Ultimately, one of the whole points of automation is to lower COSTS. The automation used solely to improve safety has pretty much already been there for a while. The next step is not to provide tools to help people and get better safety: it's to get rid of people altogether. They are absolutely all claiming - Musk included, even though he pretends to be wary of AI - that the future of transportation is fully autonomous. Nothing else.
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #95 on: November 22, 2021, 07:26:20 pm »
Yes, and one "output" of this accident is that they drastically improved the training regarding handling stalls on airliners.

Thing is, related to this whole discussion: the more AI we use, the less trained people will be. Training has a cost. Ultimately, one of the whole points of automation is to lower COSTS. The automation used solely to improve safety has pretty much already been there for a while. The next step is not to provide tools to help people and get better safety: it's to get rid of people altogether. They are absolutely all claiming - Musk included, even though he pretends to be wary of AI - that the future of transportation is fully autonomous. Nothing else.

Exactly.  We've improved human safety to the point of reaching its limit.  Any improvement now is to remove humans from the equation.  Once THAT happens, THEN we can realize the exciting benefits.

Someone in this thread mentioned "constrained environments".  Removing humans will go a long way in enforcing that constraint, thus making automation much safer.  If we're just passengers, not controllers, then a car doesn't have to worry about the idiot that's about to T-bone it.

I've seen a comment elsewhere that I agree with as well, that says that we won't have flying cars until we first have fully autonomous cars.  If the general public is inherently this bad at driving in 2D, then we certainly can't allow them to drive in 3D!
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Machine Learning Algorithms
« Reply #96 on: November 22, 2021, 07:47:05 pm »
There is something to be gained if automation is to be applied carefully:

Right now a lot of piloting tasks amount to managing stone-age automation. Given a certain amount of money and time, this already limits the training of important basics, and it has been this way for three decades already.

By using more advanced, modern automation, stupid "program the computer" tasks can be reduced to almost zero, freeing resources for training in the basics and, during flights, freeing attention from the "stupid computer" for what's actually important: airspeed, altitude and the artificial horizon. Many accidents have been caused by a lack of focus on these due to fighting with the automation; sometimes the "fight" just means the standard procedures themselves are a battle.

But, such good automation does not need neural networks or similar "AI" things. It requires classic understanding of simple computer algorithms, simulations and testbenching them, and UI/UX specialists.
 
The following users thanked this post: AaronD

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #97 on: November 22, 2021, 08:06:55 pm »
We've improved human safety to the point of reaching its limit.  Any improvement now is to remove humans from the equation. 

Possibly. Rubbish - unless you can prove that contention - and we need stronger proof than is normal in the ML fraternity.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #98 on: November 22, 2021, 08:21:39 pm »
We've improved human safety to the point of reaching its limit.  Any improvement now is to remove humans from the equation. 

Possibly. Rubbish - unless you can prove that contention - and we need stronger proof than is normal in the ML fraternity.

Not just that, but it's interesting to see claims of promoting AI to improve "human safety", while I claim the main reason by far is to lower costs.
Generally speaking, one must understand what "getting humans out of the equation" implies.
We can indeed get out of the equation for good, and there'll be nothing much to talk about anymore.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #99 on: November 22, 2021, 08:44:45 pm »
Absolutely.
In the AF447 case, the PIC was asleep when things started to get problematic, but there was a copilot in his seat. Funnily enough, in this particular case, had the copilot been asleep instead, the crash would probably never have happened. But that's just one particular case!

IMHO, #1 root cause in that one as well is still lack of basic skills and training of those basic skills. It's again the classic "oh, we are falling from the sky, I have no idea what to do, maybe pull the nose up so we go higher?!?"

As a pilot myself I agree completely. I find it absolutely incredible that an international airline pilot can be so lacking in basic flying skills. Have they turned completely into button pushers?

Quote
Solution? Similarly to doing enough PIC/AVR projects bare metal is a good starting point, learning to fly on a small aircraft with no autopilots whatsoever would be a good starting point to get enough grasp of basics such as what stall means and how to recover from it. Seemingly this isn't obvious at all to commercial airline pilots. It should be.

I think training in a standard small aircraft may be inadequate. In general they do very little stall training and these days often no spin training at all. When they actually do stall training, they are trained to initiate recovery in response to the stall warning horn sounding -- which generally happens around 5 knots faster than the actual stall, when the aircraft is actually still flying normally.

Training in gliders (sailplanes in the US) is much more stall intensive. There is no warning horn -- you have to learn to recognise the aerodynamic symptoms of stall yourself. And while Cessna pilots spend almost all their time boring holes in the sky at twice or more the stall speed, glider pilots spend a lot of time flying in maximum performance circles in thermals at just above the stall speed (or accelerated stall speed due to the steep bank angle and higher G load often used). As thermals are gusty, you often get actual stalls and it becomes absolutely ingrained what that feels like and you automatically make the required recovery action (easing the stick forward until it stops).

Here's my most popular gliding video, with currently 84960 views:


 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #100 on: November 22, 2021, 09:22:19 pm »
Absolutely.
In the AF447 case, the PIC was asleep when things started to get problematic, but there was a copilot in his seat. Funnily enough, in this particular case, had the copilot been asleep instead, the crash would probably never have happened. But that's just one particular case!

IMHO, #1 root cause in that one as well is still lack of basic skills and training of those basic skills. It's again the classic "oh, we are falling from the sky, I have no idea what to do, maybe pull the nose up so we go higher?!?"

As a pilot myself I agree completely. I find it absolutely incredible that an international airline pilot can be so lacking in basic flying skills. Have they turned completely into button pushers?

There is that tendency, allegedly.

Quote
I think training in a standard small aircraft may be inadequate. In general they do very little stall training and these days often no spin training at all. When they actually do stall training, they are trained to initiate recovery in response to the stall warning horn sounding -- which generally happens around 5 knots faster than the actual stall, when the aircraft is actually still flying normally.

Before I went solo in a glider, I had to do a complete flight with the instruments covered up. Why? Because they all lie to you, and you have to recognise it happening.

Deliberately entering a spin at 1000 ft, or to lose altitude fast, is entertaining - and anathema to powered pilots

Quote
Training in gliders (sailplanes in the US) is much more stall intensive. There is no warning horn -- you have to learn to recognise the aerodynamic symptoms of stall yourself. And while Cessna pilots spend almost all their time boring holes in the sky at twice or more the stall speed, glider pilots spend a lot of time flying in maximum performance circles in thermals at just above the stall speed (or accelerated stall speed due to the steep bank angle and higher G load often used). As thermals are gusty, you often get actual stalls and it becomes absolutely ingrained what that feels like and you automatically make the required recovery action (easing the stick forward until it stops).

Full opposite rudder, and after rotation stops, centralise rudder and stick forward (not for too long :) ).
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #101 on: November 22, 2021, 10:29:57 pm »
Any airliner pilot has flown small aircrafts AFAIK. That's not the problem. They are all supposed to have more than basic flying skills.

But an airliner is a very different beast. Sure the same skills would apply, but they have a lot more inertia, and of course as we said, a lot of automation, making them all but as direct as a small aircraft. Besides, the pilots are taught NOT TO fly them as they would  a small aircraft. They are taught exactly that. They are taught to use all the automation they can without questioning it.

There's also of course some "exceptional situation" training, but it's unfortunately less and less (as we have seen in the AF447 case), for time and cost reasons. But also maybe because neither the companies making airliners not the airline companies themselves want pilots to ever question automation. It's kinda linked to the very discussion we're having here: they want to convey the idea to pilots that it's statistically much safer to just follow automation than to try and correct it manually.

And we need bad crashes to temporarily change that direction. Then it ends up back to square one and runs in circles.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #102 on: November 22, 2021, 11:07:18 pm »
Before I went solo in a glider, I had to do a complete flight with the instruments covered up. Why? Because they all lie to you, and you have to recognise it happening.

Yes, that's standard practice. You can judge your airspeed pretty well by the control responsiveness, the level of wind noise, and (in stabilised flight) the nose attitude relative to the horizon. It's not actually hard to fly like that, at least if you have an actual airfield to land on and don't need to do a precision short landing. Trying to land in a random 100 to 200 meter long paddock without a working airspeed indicator would be dodgy.

Quote
Deliberately entering a spin at 1000ft or to lose altitude fast is entertaining - and anathema to powered pilots

You can get away with spin entry at 1000 ft in older slow lightweight docile metal or wood and fabric gliders such as Ka7 / ASK13 / Blanik. I've done it. I don't advise it in a modern fiberglass high performance trainer such as the DG1000 my club has been using since 2007 (which the above video was made in).

Quote
Quote
As thermals are gusty, you often get actual stalls and it becomes absolutely ingrained what that feels like and you automatically make the required recovery action (easing the stick forward until it stops).

Full opposite rudder and after rotation stops centralise rudder and stick forward ( not for too long :) l.

That's only for a fully-developed spin. The point of stall avoidance training is to recognise the start of a stall and to not allow it to develop into a spin. Before autorotation has started, all that is necessary is a momentary relaxation of the back-pressure on the stick.

In many gliders when turning tight in a thermal you can actually hear the airflow start to break away near the wing roots when a gust hits you or if you're pulling a little too hard. It's a bit like the sound those modified turbo cars with noisy blow-off valves make, but less dramatic.
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Machine Learning Algorithms
« Reply #103 on: November 22, 2021, 11:22:41 pm »
The problem with aircraft is that they don't have enough automation.  Consider the number of accidents caused by plugged pitot tubes.  The pitot tube heater is almost always a manual operation.  If the tube freezes, likely at altitude, the aircraft doesn't know its airspeed and assumes a nose-down attitude to increase speed.  Sometimes, the dirt gets in the way.

Why isn't the pitot tube heat automatic?  Either measure outdoor temperature and determine when to apply heat or just turn the thing on and leave it alone.
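
Something as simple as this would do - made-up temperatures, with hysteresis so the heater doesn't chatter around the threshold:

Code: [Select]
# Toy automatic pitot-heat rule: on below an outside-air-temperature
# threshold, off again only once it recovers. Thresholds are invented.
ON_BELOW_C = 10.0    # heater on when OAT drops below this
OFF_ABOVE_C = 15.0   # heater off only once OAT recovers past this

def pitot_heat(oat_c, heater_on):
    if oat_c < ON_BELOW_C:
        return True
    if oat_c > OFF_ABOVE_C:
        return False
    return heater_on     # inside the band: keep the previous state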

https://www.avweb.com/flight-safety/risk-management/pitot-static-system-failure/

You would think the pilots would 'know' that they are nose down.  The attitude indicator should show that.  Now you have conflicting information: the attitude indicator says you are nose down and the airspeed indicator says you should be nose down but, in fact, you have plenty of airspeed for level flight.  Just like you did a few minutes before the pitot tube froze up.  This should be pretty easy to simulate for training.

 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #104 on: November 22, 2021, 11:58:33 pm »
Pitot tube heating is already automatic on most modern aircraft, actually. The auto mode is on by default and pilots can manually switch it off for whatever reason would be required (following usually strict procedures.) The problem is that many aircraft still in circulation are not modern.

As far as I've read, the older Boeing 737 models may not have an auto mode, but I believe the 737 MAX does, as do most recent Boeing models, such as the 787. Most Airbus in circulation have an auto mode too.

As far as I've understood, the AF447 issue was not that the heaters were not ON, but that the pitots got clogged anyway - which apparently can happen in very severe conditions, due to the quantity of small ice balls that can enter the tubes and clog them temporarily. IIRC, in the AF447 case, the pitots eventually got rid of the ice, but by then it was too late. Airbus did have the pitots on this model modified afterwards, to limit the possibility of this happening, but I don't believe it was at all due to the pilots having forgotten to switch the heaters on.

As to pilots knowing what's happening or not: the problem is that at some point, they do NOT know what is happening, because they realize they can't trust the automation and/or instruments, but they are also in a situation where they can't trust their own perception of things and their basic flying skills - precisely because a big airliner is pretty different from a small aircraft in terms of sensations, and because, as I said earlier, not being able to trust the plane's automated systems and instruments is in itself a big cognitive dissonance for modern pilots.

Proper training should help of course, but as I said, training tends to be insufficient these days, and is mostly done on simulators, which only partially reproduce the physical movements/accelerations, and without the stress factor.
« Last Edit: November 23, 2021, 12:01:04 am by SiliconWizard »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #105 on: November 23, 2021, 12:07:48 am »
The problem with aircraft is that they don't have enough automation.  Consider the number of accidents caused by plugged pitot tubes.  The pitot tube heater is almost always a manual operation. 

What is this heater of which you speak? :) (Yes, gliders can fly higher than commercial airliners)

I've had a clogged pitot tube. It manifested itself during winch launch (i.e. at a critical time) and lasted until the aircraft came to rest. I noted the apparently low launch speed to the instructor, noted that we were climbing acceptably and controllably, and kept the nose lower than normal.  While flying I kept the nose lower than normal and ensured the aircraft was fully responsive to the controls.

It turned out not to be the traditional insect, but a flap of rubber.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #106 on: November 23, 2021, 12:11:53 am »
The problem with aircraft is that they don't have enough automation.  Consider the number of accidents caused by plugged pitot tubes.  The pitot tube heater is almost always a manual operation.  If the tube freezes, likely at altitude, the aircraft doesn't know its airspeed and assumes a nose-down attitude to increase speed.

The *aircraft* won't do that. The aircraft will continue to fly at its trimmed angle of attack, and therefore EAS, and constant rate of climb or descent (possibly zero) depending on thrust setting.

Only a confused pilot or confused autopilot will manipulate the controls to produce the dive you describe.

Also, autopilots don't adjust speed by adjusting attitude. They are designed to operate in a relatively high speed environment, far from the stall, where adjusting rate of climb by using the elevator and adjusting speed by using the throttle and/or drag devices works. They work under the assumption that any commanded climb or descent rate will be within the capabilities of the engine or drag devices to maintain the speed. Autopilots in older or smaller aircraft don't have control of the throttle at all and work purely via the aerodynamic controls (elevator and ailerons, primarily). At the most they will automatically turn the autopilot off if the speed drops below some fairly high number e.g. 99 knots for the Garmin G1000 in the Quest Kodiak seen in the popular "Missionary Bush Pilot" youtube channel.
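
As a toy illustration of that split - two independent proportional loops, with all gains, signs and units invented:

Code: [Select]
# Elevator loop holds a commanded climb rate; throttle loop holds airspeed.
def elevator_cmd(target_climb_fpm, climb_fpm, gain=0.001):
    return gain * (target_climb_fpm - climb_fpm)    # pitch up if sinking

def throttle_cmd(target_kts, airspeed_kts, gain=0.02):
    return gain * (target_kts - airspeed_kts)       # add power if slow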

Autopilots are not designed or intended to replace a pilot, but only to reduce the pilot's workload. The pilot is still in charge and responsible for monitoring the progress of the flight and making adjustments or taking over control as required.

Which, incidentally, is exactly the same as Tesla's "autopilot" feature, or indeed the dynamic cruise control and automatic lane centering available (at least as an extra cost option, but often standard) on virtually every new car for sale today.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #107 on: November 23, 2021, 12:27:28 am »
The problem with aircraft is that they don't have enough automation.  Consider the number of accidents caused by plugged pitot tubes.  The pitot tube heater is almost always a manual operation. 

What is this heater of which you speak? :) (Yes, gliders can fly higher than commercial airliners)

Only occasionally.

The glider flight below was higher than Concorde or U2, but not quite as high as the SR71 -- they almost certainly could have done it, as they were climbing strongly when they broke off the flight, but they have a protocol that they only allow each flight to go 10,000 feet higher than the previous highest flight and then analyse the data on the ground before the next flight.

Some of their flights:

52,172 feet, 3 September 2017
60,669 feet, 26 August 2018
63,776 feet, 28 August 2018
74,298 feet, 2 September 2018

Weather conditions were not suitable to further increase this on the 2019 expedition to southern Argentina, and COVID has prevented expeditions in 2020 and 2021.


 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Machine Learning Algorithms
« Reply #108 on: November 23, 2021, 01:33:51 am »
The problem with aircraft is that they don't have enough automation.  Consider the number of accidents caused by plugged pitot tubes.  The pitot tube heater is almost always a manual operation. 

What is this heater of which you speak? :)

The pitot tube on most aircraft (other than pleasure craft?) is electrically heated.

In the F-106 Flight Manual, on page 35 (of the PDF), item #17 on the right side control panel points to the Pitot Heat switch.  The photo is not very good but the switch is really there.

It's a manual operation...

https://www.usaf-sig.org/index.php/references/downloads/category/101-f-106-delta-dart-convair?download=335:t-o-1f-106a-1-flight-manual-f-106a-f-106b-01-12-1972

On pdf page 133 item #29, there is a preflight test of the pitot tube heat system but it is left off until page 138 Takeoff item #10.  Note that the pitot tube heat is turned on and left that way until landing.  After landing, all switches are turned off on page 176 #3

FWIW, the F-106 interceptor was intended to launch a Genie nuclear-tipped rocket into a crowd of incoming bombers.  The escape maneuver is around page 160.  Just in case you were wondering...

https://en.wikipedia.org/wiki/AIR-2_Genie

AFAIK, the F-106 was the only jet to be designed strictly as an interceptor.  Although highly capable, it was not intended to be a fighter.

I apologize in advance for the size of the download but the F-106 is the ONLY aircraft that has ever caught my interest.  When I was a kid, they had family day at Convair and I got to see the aircraft under construction.  My father did the final electrical tests before the planes flew up to Edwards AFB for final outfitting.

The climb-out must have been interesting because there was no desire to crash into Marine Corps Recruit Depot San Diego (MCRD).  It is right at the end of the runway at Lindbergh Field.  I saw one takeoff many years later and that plane could definitely go vertical and turn.

Designed with slide rules!

« Last Edit: November 23, 2021, 02:53:55 am by rstofer »
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #109 on: November 23, 2021, 05:00:44 am »
AFAIK, the F-106 was the only jet to be designed strictly as an interceptor.  Although highly capable, it was not intended to be a fighter.

I'll see your F-106 and raise you the English Electric Lightning.

First flight was two years before the F-106, introduction to front line service was a few months after, and the Lightning served with the RAF until 1988, while the F-106 was replaced by the F-15 in the USAF starting in 1981, and the last units retired from the National Guard in 1988.

Sticking to Convair, I reckon the Hustler was a more impressive aircraft. Not quite the same rate of climb or top speed, but it had pretty good range at supersonic speeds, setting coast to coast and New York to Paris speed records (the first supersonic transatlantic crossing), and also a flight from Tokyo to London in 8 1/2 hours averaging 1510 km/h (it had to slow to subsonic for the five refuellings) which is still today a record for a flight of that distance.

Quote
I apologize in advance for the size of the download but the F-106 is the ONLY aircraft that has ever caught my interest.  When I was a kid, they had family day at Convair and I got to see the aircraft under construction.

Certainly it's understandable that the things you see as a kid are favourites.

Stuck on an island in the South Pacific Ocean, the most impressive thing around was the A4 Skyhawk, and occasional visits from allies' Tomcats, Harriers, and F-111s.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #110 on: November 23, 2021, 10:30:34 am »
The problem with aircraft is that they don't have enough automation.  Consider the number of accidents caused by plugged pitot tubes.  The pitot tube heater is almost always a manual operation. 

What is this heater of which you speak? :) (Yes, gliders can fly higher than commercial airliners)

Only occasionally.

The glider flight below was higher than Concorde or U2, but not quite as high as the SR71 -- they almost certainly could have done it, as they were climbing strongly when they broke off the flight, but they have a protocol that they only allow each flight to go 10,000 feet higher than the previous highest flight and then analyse the data on the ground before the next flight.

Indeed, but even the UK record is ~37kft, >10kft is common and >20kft isn't remarkable.

Bloody cold up there :)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #111 on: November 23, 2021, 11:08:13 am »
Indeed, but even the UK record is ~37kft, >10kft is common and >20kft isn't remarkable.

Bloody cold up there :)

Indeed it is. I've been to ~20k feet myself, in a Club Libelle. I don't aspire to higher. And I was half my current age then.

The powerful airbrakes got a good workout on the way back down.

https://upload.wikimedia.org/wikipedia/commons/thumb/f/fd/Glasflugel_H-205_Club_Libelle_Glider.JPG/1920px-Glasflugel_H-205_Club_Libelle_Glider.JPG
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Machine Learning Algorithms
« Reply #112 on: November 23, 2021, 03:46:43 pm »
Stuck on an island in the South Pacific Ocean, the most impressive thing around was the A4 Skyhawk, and occasional visits from allies' Tomcats, Harriers, and F-111s.
Castle Air Museum (Atwater, California) has one of the British Vulcan bombers.  That thing is ginormous!

The Convair B-36 is an interesting bomber for the time.  The museum has a decommissioned Mk 17 thermonuclear bomb sitting alongside.  It is said that the bomb was found in the desert outside of Edwards AFB - not decommissioned.  I don't know about that but I have a great photo of my grandson kicking it.  I was teaching him all I know about EOD.  It's the brown bomb looking thing on the ground near the nose of the B-36.

Apparently, we're going to get a B-58 at some point.

The inventory:

https://www.castleairmuseum.org/collection

There's a lot of history in that museum.  They have a B-52 cockpit in the museum itself and a complete B-52 on the line.  It seems to be the first or second most popular aircraft there, in a close running with the SR-71.  "Open Cockpit" days occur twice a year and some of the surviving pilots show up to tell the tales.  Our first stop is ALWAYS the F-106.  Then we look at the lesser craft.

No mention of the SR-71 can go by without the "LA Speed Check" video:

https://youtu.be/Lg73GKm7GgI

« Last Edit: November 23, 2021, 03:55:08 pm by rstofer »
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #113 on: November 23, 2021, 06:47:22 pm »
No mention of the SR-71 can go by without the "LA Speed Check" video:

https://youtu.be/Lg73GKm7GgI

YES!!!  :-DD

Here's another good one:
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #114 on: November 23, 2021, 08:12:51 pm »
Stuck on an island in the South Pacific Ocean, the most impressive thing around was the A4 Skyhawk, and occasional visits from allies' Tomcats, Harriers, and F-111s.
Castle Air Museum (Atwater, California) has one of the British Vulcan bombers.  That thing is ginormous!

...and they did things that a heavy bomber really shouldn't be able to do.

Hearing it while driving a car, and seeing the full delta shape even though it was 10 miles away and had just taken off.

Even on its last valedictory flight, watching it stand on a wing.

Barrel rolling a heavy bomber, FFS? If there were an ML system in there, could you predict what it would do in such ridiculous circumstances?

https://www.bbc.co.uk/news/av/uk-england-lincolnshire-34712344 or
« Last Edit: November 23, 2021, 08:15:28 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #115 on: November 23, 2021, 09:38:04 pm »
The Convair B-36 is an interesting bomber for its time.

I've seen a B-36J with both piston and jet engines, at the SAC museum near Omaha.  And the "goblin" parasitic fighter intended to be carried by it.

Crazy stuff.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #116 on: November 23, 2021, 09:45:33 pm »
Barrel rolling a heavy bomber, FFS? If there were an ML system in there, could you predict what it would do in such ridiculous circumstances?

It's not a roll. It's just a wing-over, or "chandelle" as we glider pilots call them (not the same as what power pilots call a chandelle).

It doesn't require an aerobatics rating; we let students do them, as there's nothing really bad that can happen if you screw it up. We don't adhere to the "aerobatics is more than 60 degrees of bank or 30 degrees nose up or down" power-flying definition -- 60 degrees of bank is a standard thermalling turn in a glider.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #117 on: November 23, 2021, 10:09:54 pm »
Barrel rolling a heavy bomber, FFS? If there were an ML system in there, could you predict what it would do in such ridiculous circumstances?

It's not a roll. It's just a wing-over, or "chandelle" as we glider pilots call them (not the same as what power pilots call a chandelle).

Irritatingly I have to agree with you.

Try this 1955 poor footage:


Quote
It doesn't require an aerobatics rating; we let students do them, as there's nothing really bad that can happen if you screw it up. We don't adhere to the "aerobatics is more than 60 degrees of bank or 30 degrees nose up or down" power-flying definition -- 60 degrees of bank is a standard thermalling turn in a glider.

Yes, it is quite fun to be pulling a few G with another glider at the same altitude as you on the opposite side of the thermal. To check relative position, you look upwards at the top of their head :)
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #118 on: December 28, 2021, 02:49:18 am »
So, there we go: https://www.scmp.com/news/china/science/article/3160997/chinese-scientists-develop-ai-prosecutor-can-press-its-own

And, still the same pressing question, which remains stubbornly unanswered so far:
Quote
“The accuracy of 97 per cent may be high from a technological point of view, but there will always be a chance of a mistake,” said the prosecutor, who requested not to be named because of the sensitivity of the issue. “Who will take responsibility when it happens? The prosecutor, the machine or the designer of the algorithm?”
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #119 on: December 28, 2021, 03:47:18 am »
So, there we go: https://www.scmp.com/news/china/science/article/3160997/chinese-scientists-develop-ai-prosecutor-can-press-its-own

And, still the same pressing question, which remains stubbornly unanswered so far:
Quote
“The accuracy of 97 per cent may be high from a technological point of view, but there will always be a chance of a mistake,” said the prosecutor, who requested not to be named because of the sensitivity of the issue. “Who will take responsibility when it happens? The prosecutor, the machine or the designer of the algorithm?”

If the machine gives better-quality results than a human, does the question NEED to be answered?  Maybe the more pressing question should be, "Why are we not punishing the humans today who make worse mistakes than that?"  For example, if a judge sentences someone to life in prison for a $5 robbery with no history, why is that judge still serving?
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11260
  • Country: us
    • Personal site
Re: Machine Learning Algorithms
« Reply #120 on: December 28, 2021, 04:13:12 am »
I personally don't mind computers helping us make decisions. They do the same in many other areas.

The issue here is that people feel somewhat in control when it comes to human judges. They are elected (or assigned by elected people) and can be removed.

Computer algorithms are not under people's control to that extent. And this opens up more possibilities for the people in control to rig things. Even if changes to the algorithm are somehow voted on, it would still be too easy to hide the nasty stuff - unless the "algorithms" are designed to be simple enough for a human to apply. But that would make it an expert system, not AI, and that is not what people in power are going for in cases like this.
Alex
 

Offline Smokey

  • Super Contributor
  • ***
  • Posts: 2591
  • Country: us
  • Not An Expert
Re: Machine Learning Algorithms
« Reply #121 on: December 28, 2021, 04:19:30 am »
https://cs.stanford.edu/~jure/pubs/bail-qje17.pdf

I think that's the paper Malcolm Gladwell cited in his book "Talking to Strangers", which says that a piece of software using only facts about a criminal's history did a better job than face-to-face court judges at deciding who should get bail.
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11260
  • Country: us
    • Personal site
Re: Machine Learning Algorithms
« Reply #122 on: December 28, 2021, 04:34:33 am »
I have no issues believing that. But again, this is not AI; all you need is an expert system that takes information on the subject, previous cases, and their outcomes. This is something that could be automated even without computers. Just make judges follow a very specific algorithm with no personal input.

Or if personal input is allowed to a certain extent, then the same allowance should be applied to other similar cases.
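
As a purely hypothetical sketch of what I mean (every factor, weight and threshold below is invented), the point is that the whole decision procedure is readable and auditable line by line, unlike a trained NN:

Code: [Select]
# Hypothetical rule-based "expert system" for a bail decision.
# All factors, weights and thresholds are made up for illustration;
# the point is that the entire procedure is visible and auditable.

def bail_score(prior_convictions, failed_to_appear, violent_charge):
    score = 2 * prior_convictions        # each prior conviction adds weight
    if failed_to_appear:
        score += 3                       # history of skipping court dates
    if violent_charge:
        score += 5                       # current charge involves violence
    return score

def bail_decision(score, threshold=6):
    # The threshold is a published number that can be debated and voted on,
    # not something buried in millions of NN weights.
    return "deny bail" if score >= threshold else "grant bail"

print(bail_decision(bail_score(1, False, False)))  # -> grant bail
print(bail_decision(bail_score(2, True, True)))    # -> deny bail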
« Last Edit: December 28, 2021, 04:37:59 am by ataradov »
Alex
 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 3146
  • Country: ca
Re: Machine Learning Algorithms
« Reply #123 on: December 28, 2021, 04:53:12 am »
Just make judges follow a very specific algorithm with no personal input.

This is actually what real judges try to do. The problem is that real-world cases may be difficult to reconcile against formal definitions.
 

Online ataradov

  • Super Contributor
  • ***
  • Posts: 11260
  • Country: us
    • Personal site
Re: Machine Learning Algorithms
« Reply #124 on: December 28, 2021, 05:01:45 am »
This is actually what real judges try to do.
They can try all they want, but rich white people get lighter sentences all the time. You don't even have to cherry-pick cases; they are all over the place.

So, I would rather see some circumstance not be taken into account by a strict algorithm than let a random judge's decision carry significant weight.

And then have a reasonably simple way of extending the system to take that circumstance into account in subsequent cases. We already sort of do this, just not very efficiently. And even when we do, we still fail to apply those new rules.
Alex
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #125 on: December 28, 2021, 09:25:52 am »
This is actually what real judges try to do.
They can try all they want, but rich white people get lighter sentences all the time. You don't even have to cherry-pick cases; they are all over the place.

So, I would rather see some circumstance not be taken into account by a strict algorithm than let a random judge's decision carry significant weight.

And then have a reasonably simple way of extending the system to take that circumstance into account in subsequent cases. We already sort of do this, just not very efficiently. And even when we do, we still fail to apply those new rules.

And such problems do demonstrably become unwittingly baked into the ML algorithms. (For references, read comp.risks and its archives for many examples)

Then, unlike with humans, it is not possible to "ask" the algorithm why it generated that result. "Because the (infallible) computer says so" is the result.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline m k

  • Super Contributor
  • ***
  • Posts: 2007
  • Country: fi
Re: Machine Learning Algorithms
« Reply #126 on: December 28, 2021, 10:29:51 am »
Off Topic

I tried to find an old AI article where AI were possibly learning how to use D-flops.
The goal was a beep sound from a switch click or vice versa.
Some unorthodox ways were also present.

Maybe someone here can remember it.
Google is pretty useless.
Advance-Aneng-Appa-AVO-Beckman-Data Tech-Fluke-General Radio-H. W. Sullivan-Heathkit-HP-Kaise-Kyoritsu-Leeds & Northrup-Mastech-REO-Simpson-Sinclair-Tektronix-Tokyo Rikosha-Triplett-YFE
(plus lesser brands from the work shop of the world)
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #127 on: December 28, 2021, 04:52:35 pm »
This is actually what real judges try to do.
They can try all they want, but rich white people get lighter sentences all the time. You don't even have to cherry-pick cases; they are all over the place.

So, I would rather see some circumstance not be taken into account by a strict algorithm than let a random judge's decision carry significant weight.

And then have a reasonably simple way of extending the system to take that circumstance into account in subsequent cases. We already sort of do this, just not very efficiently. And even when we do, we still fail to apply those new rules.

And such problems do demonstrably become unwittingly baked into the ML algorithms. (For references, read comp.risks and its archives for many examples)

Then, unlike with humans, it is not possible to "ask" the algorithm why it generated that result. "Because the (infallible) computer says so" is the result.

Yes, they do.  That's an unavoidable problem with training-based AI.  Any bias in the training data will be reflected in the results, and the training data is ALWAYS biased in some way or another.  But it *does* give you the opportunity to step back, once it's running, and see in the third person what your biases were and probably still are.  Instead of carrying those biases forever because, seen from that close, you think they're "just how the world works, deal with it", you have the opportunity to correct them: step back to see them in the first place, then provide some counter-training to the AI.

And that's also the answer to why it generated a particular result: it was the average of all the training data it had to work with.  A handwritten digit decoder, for example, that was never given a blank will always offer a number, even if it's later given a blank.  That's a simple example, but I think you can extrapolate it to see how hard it is to create a good set of training data.  Thus, anyone who practically worships an infallible machine should themselves be removed from the process.  But the machine should stay and continue to be refined.
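
To make the blank-input failure concrete, here's a toy sketch (random stand-in weights, NOT a trained model) showing why a softmax classifier must always pick *some* digit:

Code: [Select]
import numpy as np

# Random stand-in weights for a 10-class digit classifier (not a trained model;
# this only illustrates the mechanics of the output layer).
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))
b = rng.normal(size=10)

def classify(image):
    logits = W @ image + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()          # softmax: probabilities sum to 1 by construction
    return int(np.argmax(probs)), float(probs.max())

blank = np.zeros(784)             # a "blank" the model was never trained on
digit, confidence = classify(blank)
print(digit, confidence)          # still reports some digit, with some confidence

There is no "none of the above" unless you explicitly train one in.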

I still argue that humans learn the same way as an AI does, by trial and error and smart self-correction, and that we're actually just as bad at explaining ourselves as a computer is.  We have layers of understanding for general use, whereas most AIs so far only have one all-encompassing layer for a specific use, but our ability to explain any particular layer is just as impossible as it is for a computer to explain its one layer.  No difference there whatsoever.  When we explain ourselves, we essentially list the results of each layer, but we can't explain the layers themselves.  So if we make a computer that understands in layers like we do, and train each layer separately, then it could offer the same explanation that a human would, thus nullifying that argument.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #128 on: December 28, 2021, 05:44:16 pm »
This is actually what real judges try to do.
They can try all they want, but rich white people get lighter sentences all the time. You don't even have to cherry-pick cases; they are all over the place.

So, I would rather see some circumstance not be taken into account by a strict algorithm than let a random judge's decision carry significant weight.

And then have a reasonably simple way of extending the system to take that circumstance into account in subsequent cases. We already sort of do this, just not very efficiently. And even when we do, we still fail to apply those new rules.

And such problems do demonstrably become unwittingly baked into the ML algorithms. (For references, read comp.risks and its archives for many examples)

Then, unlike with humans, it is not possible to "ask" the algorithm why it generated that result. "Because the (infallible) computer says so" is the result.

Yes, they do.  That's an unavoidable problem with training-based AI.  Any bias in the training data will be reflected in the results, and the training data is ALWAYS biased in some way or another.  But it *does* give you the opportunity to step back, once it's running, and see in the third person what your biases were and probably still are.  Instead of carrying those biases forever because, seen from that close, you think they're "just how the world works, deal with it", you have the opportunity to correct them: step back to see them in the first place, then provide some counter-training to the AI.

Nice idea, but even supposedly intelligent people don't do that, unfortunately.

If the output matches someone's desires or objectives or prejudices, they won't want to look further.

Consider agile continuous-integration software development. Frequently coders are happy that the unit tests give a green light, saying that means the code is working. That is nonsense, of course, because it depends on the quality of the requirements and the quality of the tests. For example, when asked which unit tests proved ACID properties, they look blank, unable to conceive that unit tests cannot prove such properties.
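
A toy illustration (hypothetical Python, with threads standing in for concurrent transactions): the unit test below goes green, yet proves nothing about atomicity or isolation:

Code: [Select]
import threading

balance = {"a": 100, "b": 0}

def transfer(src, dst, amount):
    # No locking: another thread can interleave between these reads and writes.
    s = balance[src]
    d = balance[dst]
    balance[src] = s - amount
    balance[dst] = d + amount

def test_transfer_preserves_total():
    transfer("a", "b", 10)
    assert balance["a"] + balance["b"] == 100   # green light!

test_transfer_preserves_total()   # passes -- single-threaded

# But run the same code concurrently and updates can be lost:
balance = {"a": 100, "b": 0}
threads = [threading.Thread(target=transfer, args=("a", "b", 1)) for _ in range(500)]
for t in threads: t.start()
for t in threads: t.join()
print(balance["a"] + balance["b"])   # may not be 100 on some runs -- no atomicity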

Quote
And that's also the answer to why it generated a particular result: it was the average of all the training data it had to work with.  A handwritten digit decoder, for example, that was never given a blank will always offer a number, even if it's later given a blank.  That's a simple example, but I think you can extrapolate it to see how hard it is to create a good set of training data.  Thus, anyone who practically worships an infallible machine should themselves be removed from the process.  But the machine should stay and continue to be refined.

Firstly, the result isn't the "average" of the input: there is no way of knowing how close the decision is to a breakpoint. There are many examples of single-pixel changes in images causing the classification to be completely different.

Secondly, if you simply throw more examples into the pot, you will probably just get different false classifications.
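
That breakpoint problem can be demonstrated even on a toy linear classifier (the weights and "image" below are random stand-ins; the nudge is the same trick as the fast-gradient-sign attacks used against real CNNs):

Code: [Select]
import numpy as np

# Toy linear classifier: label = sign(w . x). Random stand-ins, illustration only.
rng = np.random.default_rng(1)
w = rng.normal(size=784)     # "weights"
x = rng.normal(size=784)     # an "image"

score = w @ x
# Smallest uniform per-pixel nudge that pushes the score across the boundary:
eps = abs(score) / np.abs(w).sum() * 1.01
x_adv = x - eps * np.sign(w) * np.sign(score)

print(np.sign(score), np.sign(w @ x_adv))  # the two labels differ...
print(eps)                                  # ...yet no pixel moved by more than this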

Quote
I still argue that humans learn the same way as an AI does, by trial and error and smart self-correction, and that we're actually just as bad at explaining ourselves as a computer is.  We have layers of understanding for general use, whereas most AIs so far only have one all-encompassing layer for a specific use, but our ability to explain any particular layer is just as impossible as it is for a computer to explain its one layer.  No difference there whatsoever.  When we explain ourselves, we essentially list the results of each layer, but we can't explain the layers themselves.  So if we make a computer that understands in layers like we do, and train each layer separately, then it could offer the same explanation that a human would, thus nullifying that argument.

There's a lot of "ifs" in there, which aren't justified.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #129 on: December 28, 2021, 06:20:13 pm »
Yes, all this is nice, but again, the question of liability remains stubbornly unanswered.
It can be freaking annoying when questions are asked and nobody cares to answer.
And yes, when humans are in a position of making important decisions, they ARE liable.

This definitely IS a pressing question, one that anyone serious IS actually asking. Even, and I'd say in particular, those who are actively using or working on AI systems! Just read the article. And many others. Even Musk, who uses AI every time he can, says that.

And no, current AI is absolutely NOTHING like human intelligence. The fact that NNs are now the main tool used in machine learning seems to give that illusion somehow, at least to the uninformed. NNs barely mimic interacting neurons, in a very simplistic way. They are a very cool tool for finding patterns in very large datasets, and they work rather well for that. Most of it is machine learning. "AI" is largely a misnomer, and whether it even actually qualifies as "intelligence" - supposing we can define, without resorting to circular logic, what that is - does not, IMHO, matter one bit. We have to stop with that fallacy. It's just a nice tool, and we should treat it as any other tool.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #130 on: December 28, 2021, 06:53:24 pm »
Firstly, the result isn't the "average" of the input: there is no way of knowing how close the decision is to a breakpoint. There are many examples of single-pixel changes in images causing the classification to be completely different.

Secondly, if you simply throw more examples into the pot, you will probably just get different false classifications.

Yup. NNs are absolutely non-linear, which makes analyzing them an intractable problem.
While the inputs of an "artificial neuron" are combined in a linear fashion, the result then goes through an activation function to get anything useful out of it. That function is almost never linear.
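
A minimal sketch of a single artificial neuron, just to show where the non-linearity enters (all values below are made up):

Code: [Select]
import numpy as np

def relu(z):
    # A common activation: piecewise linear, but the kink at z = 0
    # makes the overall function non-linear.
    return np.maximum(0.0, z)

def neuron(x, w, b):
    # Linear combination of the inputs, then the non-linear activation.
    return relu(np.dot(w, x) + b)

x = np.array([0.5, -1.2, 3.0])   # made-up inputs
w = np.array([0.8, 0.3, -0.5])   # made-up weights
print(neuron(x, w, b=0.1))       # -> 0.0 here: the weighted sum is negative,
                                 #    so the neuron is clipped "off"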

Quote
I still argue that humans learn the same way as an AI does, by trial and error and smart self-correction, and that we're actually just as bad at explaining ourselves as a computer is.  We have layers of understanding for general use, whereas most AIs so far only have one all-encompassing layer for a specific use, but our ability to explain any particular layer is just as impossible as it is for a computer to explain its one layer.  No difference there whatsoever.  When we explain ourselves, we essentially list the results of each layer, but we can't explain the layers themselves.  So if we make a computer that understands in layers like we do, and train each layer separately, then it could offer the same explanation that a human would, thus nullifying that argument.

There's a lot of "ifs" in there, which aren't justified.

Yeah. A lot of assertions that aren't backed by any proof as well.

Current NNs are just a very simplistic way of modeling the human brain, to begin with, as we said. That certainly makes neurobiologists chuckle.
The other point is that we're comparing relatively small NNs, trained with relatively small datasets, to what a human brain is and what it's exposed to during someone's life. IIRC, it's estimated that just modelling the number of neurons and their interconnections (assuming we model all that right, which is again dubious at this point) would require more matter to run than is estimated to exist in our known universe, or something like that. So yeah. We can keep playing with toys. Hey, I like toys. I just don't pretend they are what they aren't. (Just as I don't pretend that a crappy TikTok video is worth as much as a good feature film, something a few seem to have no problem claiming these days.)

With that said, claiming that since humans can be wrong as well, we should not worry about AI being wrong, is one pretty nice fallacy. An almost textbook strawman argument.
« Last Edit: December 28, 2021, 06:55:51 pm by SiliconWizard »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #131 on: December 28, 2021, 07:23:25 pm »
Yes, all this is nice, but again, the question of liability remains stubbornly unanswered.
It can be freaking annoying when questions are asked and nobody cares to answer.
And yes, when humans are in a position of making important decisions, they ARE liable.

This definitely IS a pressing question, that anyone serious IS actually asking. Even, and I'd say, in particular, those that are actively using or working on AI systems! Just read the article. And many others. Even Musk, which uses AI every time he can, says that.

The liability question is a touchstone: the answer will decide the fate of industries and/or individuals.

When it was announced, a few years ago, that there would be UK trials and experiments with autonomous vehicles, the insurance industry was a participant in the relevant studies. I haven't heard the results.

Currently Tesla is being deceitful. It lets people believe the cars are driverless, but when there is an accident, the responsibility is dumped on the driver.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #132 on: December 28, 2021, 07:28:39 pm »
Currently Tesla is being deceitful. It lets people believe the cars are driverless, but when there is an accident, the responsibility is dumped on the driver.

Yep. We talked about it earlier.
Funnily enough, Elon Musk talks about that on a regular basis. I've heard a few talks in which he was saying that we need to regulate all this as soon as possible, and he seems pretty conscious of the risks of leaving it unregulated. But that's sweet talk. Meanwhile, he's still perfectly OK with selling cars without this regulatory frame that he claims is needed, and he takes advantage of the lack of regulation.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8172
  • Country: fi
Re: Machine Learning Algorithms
« Reply #133 on: December 29, 2021, 11:34:55 am »
And no, current AI is absolutely NOTHING like human intelligence.

Indeed; even if NNs could theoretically work like a human brain, current implementations are at the level of an ant's brain due to physical (electrical) limitations, and given the current rate of development (improvements in transistor counts, etc.), maybe we'll be there around the year 2500.

This is something laymen and NN/AI fanboys easily overlook. The results may be encouraging, but the target of human-like intelligence is still really far away.

This being said, ants can do pretty amazing things. But they can't drive a car or prosecute.

A human can, by using their human brain and classical non-AI algorithms, organize massive amounts of high-quality training data to make a simple (low neuron count) ant-level NN behave seemingly much better than its animal counterpart. But this is not real intelligence; it's a gimmick to hide the primitive level of intelligence. The result is a total lack of complex context understanding, even if such NNs perform well in constrained classification tasks.
« Last Edit: December 29, 2021, 11:40:48 am by Siwastaja »
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #134 on: December 29, 2021, 12:12:23 pm »
Two examples over the last couple of days for the AI/ML fanbois to consider...

What training set would guarantee that nothing like this would occur?
Quote
Amazon has updated its Alexa voice assistant after it "challenged" a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.

The suggestion came after the girl asked Alexa for a "challenge to do".

"Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs," the smart speaker said.
https://www.bbc.co.uk/news/technology-59810383

Would they be happy if they were in a jurisdiction that automatically charged them with crimes (with only a 3% error rate!)?
Quote
In a scenario that's part "Robocop" and part "Minority Report," researchers in China have created an AI that can reportedly identify crimes and file charges against criminals. Futurism reports:
The AI was developed and tested by the Shanghai Pudong People's Procuratorate, the country's largest district public prosecution office, South China Morning Post reports. It can file a charge with more than 97 percent accuracy based on a description of a suspected criminal case. "The system can replace prosecutors in the decision-making process to a certain extent," the researchers said in a paper published in Management Review seen by SCMP.
https://yro.slashdot.org/story/21/12/27/2129202/china-created-ai-prosecutor-that-can-charge-people-with-crimes
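
Some back-of-envelope arithmetic (the caseload below is invented, purely for illustration) on what "only 3% error" means at scale:

Code: [Select]
# Hypothetical caseload, purely to show the scale implied by a "3% error rate".
caseload_per_year = 100_000
error_rate = 0.03

wrongful_charging_decisions = caseload_per_year * error_rate
print(wrongful_charging_decisions)   # 3000 mistaken charging decisions per year

# And "accuracy" alone says nothing about which way the errors cut:
# false charges against the innocent, or missed charges against the guilty.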
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #135 on: December 29, 2021, 05:44:58 pm »
This is something laymen and NN/AI fanboys easily overlook. The results may be encouraging, but the target of human-like intelligence is still really far away.

Yes, but again I think they get confused because of the apparent power of these tools compared to what *we* can do, for certain kinds of analysis or calculations. Heck, that's nothing new: even a basic calculator can do calculations infinitely faster, and with a much lower probability of getting them wrong, than any human could. The same goes for statistically analyzing huge amounts of data. Digital tools are very good at that, and thus very useful. They are tools. We've been making tools to help us with many different tasks for as long as the human species, in its various forms, has existed, and tools are exactly that: things that help us do what we could not do without them - or at least do those things more efficiently, faster, etc.

The "tipping point" here appears as soon as we claim to design, and use tools that can not just help us, but replace us.
The associated issue, as I stressed out repeatedly, is liability. But I think it's deeply related to the above: if a tool is still defined as a tool, then the chain of liability is the classic: usually, the user is liable if it can be proven that they used the tool improperly, provided that the proper use was duly described in a user's manual, clearly enough for every potential user to understand it, and not missing critical information (or, of course, if the tool is very simple, that "proper" use was trivial to infer). Then, if that's not the case, the liability will be placed on the next item of the chain. Could be the reseller if they failed to give proper direction to the buyer when they sold the tool. Otherwise, it will go to the vendor. They can themselves turn to one of their subcontractors, if some subcontracted work or part is faulty, etc.

All of this shatters into pieces when you start using automated decision tools. Interestingly, even in that case we could use the above process for determining liability, except that it's such a large can of worms that liability becomes very hard to determine - and this fuzziness is also very convenient for all the people involved.

Although not just dealing with those issues, but being more general, this is interesting: https://www.jurist.org/news/2021/10/chile-becomes-first-country-to-pass-neuro-rights-law/
« Last Edit: December 29, 2021, 05:47:53 pm by SiliconWizard »
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #136 on: December 31, 2021, 06:49:14 pm »
I've been away for a few days and, coming back to this, it looks to me like "the anti-singularity crowd" hijacked this thread surprisingly early and no one caught it, including myself.  The argument that I see is essentially that the entire concept is fundamentally evil because we don't understand it, and therefore all further development of it in any field should be banned.  (That ban is a lost cause already.  You can disagree with widespread AI all you want, but it's still going to happen.  You might delay it, but that's *all* you can do.)  Very little about how to pursue it responsibly.

For example, it should probably work in a restricted space, so that the classic and most difficult problems become irrelevant.  It doesn't need to account for human idiots on the road because they're physically blocked from having an influence, both by non-entry for non-compliant vehicles, and by disabling the detailed controls with no way to get them back while in that area.  (security first, on every front, not as an afterthought on a "cool silicon valley toy")  If you want to drive manually, then you stay off that road.  Period.
Not much different, at that point, from an automated rail line like you might find at a large airport, except that you might keep a personal "powered train car" in your garage.  (Or maybe there's a massive automated taxi service and ALL human driving is banned, or...)

For Criminal Justice and other bureaucratic functions, two things need to happen (you're free to argue "good luck" on both |O):
  • It needs to be drilled in, constantly, well beyond the point of being offensive to the trainees, that this is NOT a god!  Any who show that they still don't understand that need to be banned from bureaucracy for life.  Not just the position where they showed it, but *any* bureaucratic position, *anywhere*.  Yes, that's harsh, but the harshness doesn't diminish the requirement.  (Can you tell I don't like Vogons?)
  • "Retraining" to correct bias, should apply both to the AI and to humans.  That bias can never be removed completely, so it'll always be an ongoing process, and we'll quickly get to the problem of, "what is unbiased anyway?"  Especially when narrow-minded political interests are involved.  (including everything from the CCP to pretty much every special interest group in the Western World)  So before we get too serious about that, maybe we need to fix the general attitude so that we're not so narcissistic on every level.  That by itself is far from trivial.



None of that, however, is what the OP had in mind.  For reference, the original post is:

I don't understand how do we choose  algorithms in Machine Learning. For Example I need  to make model that identify which flower is on the plan t. google search show that we will need CNN algorithm but  I don't understand why the CNN is only useful for this project
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #137 on: December 31, 2021, 08:02:11 pm »
Those strawman arguments completely fail to respond to - let alone answer - the points made. Simply dismissing other people's points because you haven't bothered to consider their validity is very unimpressive.

That makes you look like a TruFan zealot, without judgement.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #138 on: January 01, 2022, 12:37:09 am »
Those strawman arguments completely fail to respond to - let alone answer - the points made. Simply dismissing other people's points because you haven't bothered to consider their validity is very unimpressive.

That makes you look like a TruFan zealot, without judgement.

You sound to me like just as much of a strawman as you accuse me of.
 

Offline ralphrmartin

  • Frequent Contributor
  • **
  • Posts: 480
  • Country: gb
    • Me
Re: Machine Learning Algorithms
« Reply #139 on: January 01, 2022, 04:58:45 pm »
Would they be happy if they were in a jurisdiction that automatically charged them with crimes (with only a 3% error rate!)?

I took this to read that 97% of charges resulted in successful prosecution.
Compare that to the UK,  where the error rate is of the order of 20%:
https://www.cps.gov.uk/publication/cps-data-summary-quarter-1-2020-2021
(Last two quarters quoted had successful prosecution rates of 84% and 78%).

This sounds like a sensible way of doing things. Use the AI to decide when to take the case to court, then let the humans in the court make the final decision.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #140 on: January 01, 2022, 06:29:24 pm »
Would they be happy if they were in a jurisdiction that automatically charged them with crimes (with only a 3% error rate!)?

I took this to read that 97% of charges resulted in successful prosecution.
Compare that to the UK,  where the error rate is of the order of 20%:
https://www.cps.gov.uk/publication/cps-data-summary-quarter-1-2020-2021
(Last two quarters quoted had successful prosecution rates of 84% and 78%).

This sounds like a sensible way of doing things. Use the AI to decide when to take the case to court, then let the humans in the court make the final decision.

There are several possible interpretations. Who knows what it means?!
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
Re: Machine Learning Algorithms
« Reply #141 on: January 01, 2022, 08:22:45 pm »
Same things can be said about humans.
Up to a point. If we see something peculiar which suggests an optical illusion or an unknown configuration of a known object type, we can switch from recognition to reasoning: constructing models of how the underlying image could correspond to known examples through all the possible real-world transformations our experience-trained neural network can come up with. It's not very fast, but often still fast enough to be useful during, say, driving.

Reasoning is not a once-through process; it's the domain of hard AI.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #142 on: January 01, 2022, 08:31:39 pm »
Of course. And the fact that current AI is very, very far from human reasoning is not even debated anywhere except among laymen, businessmen and politicians.
Absolutely no researcher in AI will ever claim that it is close. If you ever find one, do question their scientific background, intellectual honesty and possible conflicts of interest.
 
The following users thanked this post: Siwastaja

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
Re: Machine Learning Algorithms
« Reply #143 on: January 01, 2022, 09:27:58 pm »
Yet a researcher who is perfectly happy spending their entire career on once-through classifiers/predictors goes along with everyone calling them an AI researcher.

Much like Elon calling driver assist "autopilot", they know exactly what they are doing, and it's not honest.  Most of the field has been disingenuously named for decades.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #144 on: January 01, 2022, 09:40:08 pm »
Yet a researcher who is perfectly happy spending their entire career on once-through classifiers/predictors goes along with everyone calling them an AI researcher.

Much like Elon calling driver assist "autopilot", they know exactly what they are doing, and it's not honest.  Most of the field has been disingenuously named for decades.

Oh, yeah. As I think I already said, even "AI" here is a misnomer, I agree, but most "honest" researchers I've seen actually call that "machine learning" exclusively, and not AI.
The "AI" term itself is marketing, and to be fair, the OP themselves didn't use it.

But this term is not neutral; it's a powerful communication tool. We would probably not let "machine learning", in those terms, make critical decisions. But once it's coined "AI", everything seems to become possible.
« Last Edit: January 01, 2022, 09:43:11 pm by SiliconWizard »
 

Online NiHaoMike

  • Super Contributor
  • ***
  • Posts: 9018
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: Machine Learning Algorithms
« Reply #145 on: January 01, 2022, 11:35:17 pm »
Much like Elon calling driver assist "autopilot", they know exactly what they are doing, and it's not honest.  Most of the field has been disingenuously named for decades.
Aren't commercial aircraft autopilots basically equivalent to Level 2 (pilot must be ready to take control at any time), which is what Tesla's system is? I think the real problem is that the general public doesn't really understand what an aircraft autopilot does.
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #146 on: January 02, 2022, 02:57:21 am »
Much like Elon calling driver assist "autopilot", they know exactly what they are doing, and it's not honest.  Most of the field has been disingenuously named for decades.
Aren't commercial aircraft autopilots basically equivalent to Level 2 (pilot must be ready to take control at any time), which is what Tesla's system is? I think the real problem is that the general public doesn't really understand what an aircraft autopilot does.

EXACTLY.

Many autopilots in small planes can maintain a set heading and altitude, and nothing more. It lets you take your hands off the controls so that you can stretch, check your charts, communicate on the radio, etc. It will happily fly you into the side of a mountain, if there is one there. The same if you tell it to hold altitude but your engine power is, or becomes, insufficient for some reason -- retarded throttle, lack of fuel, carb icing, etc. You'll get slower and slower until the autopilot stalls you. The autopilots in small turboprops will disengage and sound an alert below a certain minimum speed: 99 knots for the G1000 in a Quest Kodiak, for example, and probably similar in a Cessna Caravan. Those are multi-million-dollar planes. I'm not sure if the autopilots in $50k (used) Cessnas and Pipers will do that -- I've just asked a friend with a "turbo" (charged) Piper Arrow and will report back.

Slightly better autopilots can maintain a set rate of climb or descent to a pre-programmed altitude. But they don't do anything to make sure you have enough engine power to do this, or to prevent you exceeding your maximum speed on a descent.

Slightly better autopilots will enable you to automatically follow a VOR radial or a GPS track. And maybe even program in a short sequence of paths from one radio beacon or GPS location to another. Now we're getting to that Garmin G1000 in the Caravan or Kodiak.

But if there's a mountain in the way, they'll happily fly you straight into it. Or into another plane. Or into a storm.

What Tesla's cars can do is already 10x, 100x more advanced than any normal airliner's autopilot. Not only navigating roads that are far, far more complex than any air navigation route, but also dealing with other traffic, and pedestrians, and unexpected blockages.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #147 on: January 02, 2022, 03:05:05 am »
What Tesla's cars can do is already 10x, 100x more advanced than any normal airliner's autopilot. Not only navigating roads that are far, far more complex than any air navigation route, but also dealing with other traffic, and pedestrians, and unexpected blockages.

That's the point. You can't claim they are exactly the same when they are absolutely not. Autopilots for aircraft do not have to implement obstacle avoidance, nor follow complex routes at a scale of less than 1 meter - which are the very hard parts of those cars' autopilots.

Oh, and avionics systems are designed and tested with stringent methods. Not quite the same level as automotive.

So, those car autopilots are indeed much more complex, designed under a rather lighter regulatory frame, and built on technology that we don't completely master. Yeah.

(Of course, on top of that, we can also mention that aircraft pilots are trained professionals, which your average Joe who can buy one of those cars isn't. He has never had any training, let alone an exam, involving the autopilot function. That's a major issue. If anything, being legally authorized to drive a car with an autopilot should, IMO, require training and an exam, and be mentioned on your driver's license.)
« Last Edit: January 02, 2022, 03:07:04 am by SiliconWizard »
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #148 on: January 02, 2022, 03:56:50 am »
My 2008 Subaru [1] has camera-based adaptive cruise control and I use it a huge percentage of the time that I'm driving in traffic, whether city or highway. Awesome that it operates right down to torque converter creep speed (or below, with brakes).

I think it's pretty much an optimum level of automation. You need to steer, so you need to watch the road, but it's amazing how much cognitive load it removes, based on how much longer I can drive without being fatigued.

It's not as good as the 2017 Outback I owned in California in 2019, but it's good enough. And the car is much more fun and cost me 1/4 as much to buy :-)


[1] yes, 2008, not 2018! "2.5XT EyeSight", a world first, they claimed https://www.subaru.co.jp/news/archives/08_04_06/08_05_08_02.html They were pretty expensive when new, but there are a lot of used ones coming into NZ the last year or two at $10k to $12k or so (around USD $7k to $8k) with 80000 km / 50000 miles or so. I imagine probably the UK too.

https://www.fuelly.com/car/subaru/outback/2008/brucehoult/1005227
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #149 on: January 02, 2022, 09:58:56 am »
What Tesla's cars can do is already 10x, 100x more advanced than any normal airliner's autopilot. Not only navigating roads that are far, far more complex than any air navigation route, but also dealing with other traffic, and pedestrians, and unexpected blockages.

That's the point. You can't claim they are exactly the same when they are absolutely not. Autopilots for aircraft do not have to implement obstacle avoidance, nor follow complex routes at a scale of less than 1 meter - which are the very hard parts of those cars' autopilots.

Oh, and avionics systems are designed and tested with stringent methods. Not quite the same level as automotive.

So, those car autopilots are indeed much more complex, designed under a rather lighter regulatory frame, and built on technology that we don't completely master. Yeah.

(Of course, on top of that, we can also mention that aircraft pilots are trained professionals, which your average Joe who can buy one of those cars isn't. He has never had any training, let alone an exam, involving the autopilot function. That's a major issue. If anything, being legally authorized to drive a car with an autopilot should, IMO, require training and an exam, and be mentioned on your driver's license.)

Just so.

Firstly, inadequate testing and training is no longer limited to road vehicles.

Secondly, untrained drivers exhibit the Dunning-Kruger effect: they don't know that they don't know what the "autopilot" won't do properly.

Both of those are illustrated by the Boeing 737 Max. And that's in a much simpler environment!
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #150 on: January 02, 2022, 10:01:59 am »
My 2008 Subaru [1] has camera-based adaptive cruise control and I use it a huge percentage of the time that I'm driving in traffic, whether city or highway. Awesome that it operates right down to torque converter creep speed (or below, with brakes).

I think it's pretty much an optimum level of automation. You need to steer, so you need to watch the road, but it's amazing how much cognitive load it removes, based on how much longer I can drive without being fatigued.

I haven't driven one, but that seems reasonable. The key point is that all concerned know the driver is always in control.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #151 on: January 02, 2022, 10:39:17 am »
By the way, even with technology that is now 13 years old, it detects and slows/brakes for cyclists and pedestrians, something that even current radar-based systems struggle with. On the other hand, it switches off (with a lot of beeps) if you drive directly at the setting or rising sun (especially uphill). Radar ones probably cope with that.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #152 on: January 02, 2022, 10:46:54 am »
By the way, even with technology that is now 13 years old, it detects and slows/brakes for cyclists and pedestrians, something that even current radar-based systems struggle with. On the other hand, it switches off (with a lot of beeps) if you drive directly at the setting or rising sun (especially uphill). Radar ones probably cope with that.

Out of curiosity, do you remember whether the marketing (and instruction book) were open and honest about what the system didn't do? That's a problem with Musk and the youngsters who haven't seen the fundamental limitations of ML techniques.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #153 on: January 02, 2022, 12:21:06 pm »
By the way, even with technology that is now 13 years old, it detects and slows/brakes for cyclists and pedestrians, something that even current radar-based systems struggle with. On the other hand, it switches off (with a lot of beeps) if you drive directly at the setting or rising sun (especially uphill). Radar ones probably cope with that.

Out of curiosity, do you remember whether the marketing (and instruction book) were open and honest about what the system didn't do? That's a problem with Musk and the youngsters who haven't seen the fundamental limitations of ML techniques.

The manual is in Japanese.

I don't think there's a lot of ML involved. Doing edge detection is a fairly straightforward convolution operation and then comparing edges on a stereo pair to find range -- and comparing range over time to find relative speed -- is just math.

My typical following distance at 100 km/h is about 80 m (I use the most distant of the three settings) and the system quickly and accurately determines the relative velocity of cars at about twice that distance. Anything beyond that is ignored.

I think it uses steering wheel angle to decide which vehicle ahead of you is in your lane. There is certainly image processing to find lane markers near to you for the lane departure warning, but I don't know whether it uses image processing to find the lanes out 80 to 100 m in front.
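
For the curious, the "just math" part comes down to pinhole stereo geometry. A sketch with invented numbers (Subaru's actual focal length and camera baseline will differ):

Code: [Select]
# Pinhole stereo: range Z = f * B / d, where d is the disparity (pixels) of the
# same edge between the two cameras. Focal length and baseline are invented here.

def depth_from_disparity(disparity_px, focal_px=1400.0, baseline_m=0.35):
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(6.0))   # ~82 m: about a highway following distance

# Relative velocity falls out of the change in range between frames:
z1 = depth_from_disparity(6.0)
z2 = depth_from_disparity(6.1)     # half a second later, slightly larger disparity
print((z2 - z1) / 0.5)             # ~ -2.7 m/s: slowly closing on the car ahead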
 

Offline Marco

  • Super Contributor
  • ***
  • Posts: 6721
  • Country: nl
Re: Machine Learning Algorithms
« Reply #154 on: January 02, 2022, 02:46:19 pm »
I think the real problem is that the general public doesn't really understand what an aircraft autopilot does.

Which returns us to the fact that Elon knows exactly what he's doing, and it's not honest.

A Level 2 system will statistically be able to keep a plane flying until the fuel runs out, at a couple of 9s; a Tesla, not so much. The sky is a slightly easier environment.
« Last Edit: January 02, 2022, 03:04:16 pm by Marco »
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #155 on: January 02, 2022, 05:47:18 pm »
Firstly, inadequate testing and training is no longer limited to road vehicles.

True, even though that's still going to be a much more severe problem.

Secondly, untrained drivers have the Dunning-Kruger syndrome: they don't know that they don't know what the "autopilot" won't do properly.
Both those are illustrated with the Boeing 737 Max. And that's in the much simpler environment!

Yep! Informing users should always be top priority.

As I said, if anything, those driving aids should require a specific license to be allowed to use them, IMO.
And, of course, as said, on the road the systems are much more complex, with less control and a lighter regulatory environment. A nice recipe for disaster.

As to accidents, the analysis part here again is completely different. Plane crashes are fully investigated, sometimes for several years, until the root cause is determined with sufficient evidence and reasonable certainty. Car crashes? That's maybe a few hours and we move on. Very rarely more than this.

Even though the risks are definitely an issue, I would be kinda OK with this technology being deployed *as long as the vendors take full responsibility*, at least for the time being, while the technology is not yet fully proven and the regulatory and legal frames are fuzzy. Of course, once all that settles, we could switch to the usual chain of liability. But not at this point. That, and requiring much deeper investigation in case of accidents - but that's probably not going to happen. The cost would be gigantic.

People promoting new, risky and/or unproven tech and not being liable for the consequences are a major danger. Talk is ultra cheap.
As soon as they are held liable, things start to fall into place, deployments become reasonable and due care is taken, etc.

The tech is never the problem.
« Last Edit: January 02, 2022, 05:58:00 pm by SiliconWizard »
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #156 on: January 02, 2022, 09:38:55 pm »
Even though the risks are definitely an issue, I would be kinda OK with this technology being deployed *as long as the vendors take full responsibility*

"Take responsibility" in what way, precisely?

Does Ford take responsibility when someone crashes one of their cars?

For quite some years now, Tesla cars on "autopilot" have crashed fewer times per million km than cars driven by humans. They crash in different ways, and when they crash they attract far more publicity than other car crashes. But they have been overall safer -- and improving rapidly (which humans aren't) -- for a long time.

Perfect safety doesn't and can't exist.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #157 on: January 02, 2022, 09:55:59 pm »
Even though the risks are definitely an issue, I would be kinda OK with this technology being deployed *as long as the vendors take full responsibility*

"Take responsibility" in what way, precisely?

Does Ford take responsibility when someone crashes one of their cars?

For quite some years now already, Tesla cars on "autopilot" crash fewer times per million km than cars driven by humans. They crash in different ways, and when they crash they attract far more publicity than other car crashes. But they have been overall safer -- and improving rapidly (which humans aren't) -- for a long time.

Perfect safety doesn't and can't exist.

Take responsibility for the claims made about its performance limits, and for design failures. Like they infamously didn't with the Pinto.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online brucehoult

  • Super Contributor
  • ***
  • Posts: 4037
  • Country: nz
Re: Machine Learning Algorithms
« Reply #158 on: January 02, 2022, 10:51:32 pm »
Take responsibility for the claims made for its performance limits and design failures. Like they infamously didn't with the Pinto.

Not a great example.

Later analysis shows the Pinto wasn't significantly different to or worse than its peers, all of which had similar designs.

The whole thing was a hatchet job by Ralph Nader and 60 Minutes. The famous video was basically faked.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 19508
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #159 on: January 03, 2022, 09:41:01 am »
Take responsibility for the claims made for its performance limits and design failures. Like they infamously didn't with the Pinto.

Not a great example.

Later analysis shows the Pinto wasn't significantly different to or worse than its peers, all of which had similar designs.

The whole thing was a hatchet job by Ralph Nader and 60 Minutes. The famous video was basically faked.

So, it was the whole class of vehicles that had similar inherent design flaws? That seems directly relevant to ML!

Are you saying that the design flaws didn't roast people, that calculations about compensation vs rectification costs weren't made, and that tightened laws weren't needed?
« Last Edit: January 03, 2022, 09:43:38 am by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14475
  • Country: fr
Re: Machine Learning Algorithms
« Reply #160 on: January 03, 2022, 05:53:48 pm »
Even though the risks are definitely an issue, I would be kinda OK with this technology being deployed *as long as the vendors take full responsibility*

"Take responsibility" in what way, precisely?

Even if you did that because you were only interested in the "responsibility" part of my statement, please don't quote partially, especially if you then use a strawman argument ('any car maker for any car crash').

In this way: brand-new technology of this kind is still largely unknown territory with potentially unexpected behavior (as was discussed in this thread), so pushing it hard onto customers without informing them enough (and training them!), and without a legal framework in place, is irresponsible.

In case of a crash, as I mentioned above, I think exceptional investigations should take place until, again, the tech is completely proven. And of course, ideally, those investigations should be led by 100% independent organizations. Would you trust Boeing or Airbus to lead their own investigation, and them only, in case of a plane crash? But we get back to points made earlier: ML-based systems are extremely hard to analyze.

Given all that, I do think the vendors should be considered liable until it's fully proven that it was the user's fault. Right now, it's mostly vendors having a quick look at the accident and claiming, in almost all cases, that "no technical issue was found, so it's entirely the driver's fault". Where are the independent labs challenging that? Is this acceptable? Come on. And it's as though the authorities almost don't care either, because "it's this shiny new tech that is obviously the future and we can't hinder that progress". To me, that sounds more like a child's approach to safety than a responsible use of technology.

Does Ford take responsibility when someone crashes one of their cars?

As I just said - strawman argument. It's as though you hadn't read any of the now 7 pages of the discussion.

For quite some years now already, Tesla cars on "autopilot" crash fewer times per million km than cars driven by humans.

Ditto. It's as though you're just ignoring 7 pages of discussion here. We already went into that.
So no, not everything is equal to everything else; no, safety is not just a matter of numbers in an Excel sheet (I know that's a popular way of dealing with everything these days); and yes, we can make figures say just about anything.

Statistics so far don't mean squat anyway: there are far, far too few cars with those features on the road to get anything statistically significant, or comparable with the decades of data we have for human-driven cars.
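
To put a rough number on that, here's a minimal sketch (Python; every crash count and mileage below is made up purely for illustration, not actual Tesla or fleet data) of how wide the uncertainty on a crash rate is when the exposure is small:

Code: [Select]
# Sketch: uncertainty of a crash-rate estimate vs. fleet exposure.
# All counts and mileages below are hypothetical, for illustration only.
from scipy.stats import chi2

def poisson_rate_ci(events, exposure_mkm, conf=0.95):
    """Exact (Garwood) two-sided CI for a Poisson rate, in crashes per million km."""
    a = 1.0 - conf
    lo = 0.0 if events == 0 else chi2.ppf(a / 2, 2 * events) / (2 * exposure_mkm)
    hi = chi2.ppf(1 - a / 2, 2 * (events + 1)) / (2 * exposure_mkm)
    return lo, hi

# A small fleet with driving aids vs. decades of human driving (made-up data).
for label, events, exposure in [("small fleet", 4, 10.0),
                                ("large fleet", 4000, 10000.0)]:
    lo, hi = poisson_rate_ci(events, exposure)
    print(f"{label}: {events / exposure:.2f} crashes/Mkm, "
          f"95% CI ({lo:.2f}, {hi:.2f})")

Both fleets have the same point estimate (0.40 crashes per million km), but the small fleet's interval runs from roughly 0.11 to 1.02, nearly an order of magnitude wide. A headline like "fewer crashes per million km" can easily be noise until the exposure is large enough.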

But even if that ended up getting us better "figures", is that the end of the story? Nope.
« Last Edit: January 03, 2022, 06:01:06 pm by SiliconWizard »
 

Offline rstofer

  • Super Contributor
  • ***
  • Posts: 9890
  • Country: us
Re: Machine Learning Algorithms
« Reply #161 on: January 07, 2022, 05:48:54 pm »
Does Ford take responsibility?  Beats me!

But Chevy has a battery problem with ALL Bolt EVs, and there have been about 12 fires out of 100,000+ vehicles.  They are working toward replacing all of the batteries in all of the cars.  So, yes, Chevy takes responsibility.  And the replacement comes with a 100,000-mile warranty!

This is a serious issue; I own one of the Bolt EVs.

https://www.cars.com/articles/gm-announces-fix-for-chevrolet-bolt-ev-battery-problems-441536/

No AI involved (yet), but Chevy is trying to do right even though the number of fires is quite low.

I am NEVER going to trust my life to some AI driving my vehicle.  It's NEVER going to happen!  If I'm not driving, I'm not going!
« Last Edit: January 07, 2022, 11:34:28 pm by rstofer »
 

