Author Topic: Machine Learning Algorithms  (Read 25187 times)


Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19520
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #125 on: December 28, 2021, 09:25:52 am »
This is actually what real judges try to do.
They can try all they want, but rich white people get lighter sentences all the time. You don't even have to cherry pick cases, they are all over the place.

So, I would rather see some circumstance not be taken into account in a strict algorithm, than let a random judge decision have a significant weight.

And then have a reasonably simple way of extending the system to take that circumstance into account in the following cases. We already sort of do this, just not very efficiently. And even when we do, we still fail to apply those new rules.

And such problems do demonstrably become unwittingly baked into the ML algorithms. (For references, read comp.risks and its archives for many examples)

Then, unlike with humans, it is not possible to "ask" the algorithm why it generated that result. "Because the (infallible) computer says so" is the result.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline m k

  • Super Contributor
  • ***
  • Posts: 2010
  • Country: fi
Re: Machine Learning Algorithms
« Reply #126 on: December 28, 2021, 10:29:51 am »
Off Topic

I tried to find an old AI article in which an AI was apparently learning how to use D flip-flops.
The goal was to produce a beep sound from a switch click, or vice versa.
Some unorthodox approaches were also present.

Maybe someone here can remember it.
Google is pretty useless.
Advance-Aneng-Appa-AVO-Beckman-Data Tech-Fluke-General Radio-H. W. Sullivan-Heathkit-HP-Kaise-Kyoritsu-Leeds & Northrup-Mastech-REO-Simpson-Sinclair-Tektronix-Tokyo Rikosha-Triplett-YFE
(plus lesser brands from the work shop of the world)
 

Offline AaronD

  • Frequent Contributor
  • **
  • Posts: 260
  • Country: us
Re: Machine Learning Algorithms
« Reply #127 on: December 28, 2021, 04:52:35 pm »
This is actually what real judges try to do.
They can try all they want, but rich white people get lighter sentences all the time. You don't even have to cherry pick cases, they are all over the place.

So, I would rather see some circumstance not be taken into account in a strict algorithm, than let a random judge decision have a significant weight.

And then have a reasonably simple way of extending the system to take that circumstance into account in the following cases. We already sort of do this, just not very efficiently. And even when we do, we still fail to apply those new rules.

And such problems do demonstrably become unwittingly baked into the ML algorithms. (For references, read comp.risks and its archives for many examples)

Then, unlike with humans, it is not possible to "ask" the algorithm why it generated that result. "Because the (infallible) computer says so" is the result.

Yes, they do.  That's an unavoidable problem with training-based AI.  Any bias in the training data will be reflected in the results, and the training data is ALWAYS biased in some way or another.  But it *does* give you the opportunity to step back once it's running and see, in the third person, what your biases were and probably still are.  Instead of carrying those biases forever because, seen from up close, you think they're "just how the world works, deal with it", you have the opportunity to correct them: first by stepping back to see them at all, and then by providing some counter-training to the AI.
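A toy sketch of how that plays out (hypothetical data and a deliberately crude stand-in "model", purely for illustration):

```python
# A toy sketch: a model trained on a biased sample reproduces the bias,
# and "counter-training" - here, simply reweighting the under-represented
# cases - pushes back on it.  Nothing here is a real sentencing model.
from collections import Counter

def train_majority(labels, weights=None):
    """The crudest possible 'model': predict the (weighted) majority label.
    It is enough to show how a sampling bias flows straight into the output."""
    if weights is None:
        weights = [1.0] * len(labels)
    totals = Counter()
    for label, w in zip(labels, weights):
        totals[label] += w
    return max(totals, key=totals.get)

# Biased training set: outcome "lenient" is over-represented 9:1.
labels = ["lenient"] * 90 + ["harsh"] * 10
print(train_majority(labels))            # lenient - the bias, baked in

# Counter-training: upweight the under-represented outcome.
weights = [1.0] * 90 + [10.0] * 10       # "harsh" now carries total weight 100
print(train_majority(labels, weights))   # harsh
```

The point is only the mechanism: whatever imbalance the training sample carries, the trained output carries too, and the reweighting step is one simple form of the counter-training described above.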

And that's also the answer to why it generated a particular result.  That was the average of all the training data that it had to work with.  A handwritten digit decoder, for example, that was never given a blank, will always offer a number, even if it's later given a blank.  That's a simple example, but I think you can extrapolate it to see how hard it is to create a good set of training data.  Thus, anyone who practically worships an infallible machine, should themselves be removed from the process.  But the machine should stay and continue to be refined.
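The blank-digit example can be sketched directly (random stand-in weights rather than a trained model; the structure is what matters):

```python
# A sketch of the blank-input problem: a 10-way softmax classifier must
# distribute probability over the digits 0-9, so even an all-zero "blank"
# image yields some digit.  The weights are random stand-ins, not a
# trained model - the forced choice comes from the structure, not the weights.
import math
import random

def softmax(z):
    m = max(z)                          # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
n_pixels, n_classes = 784, 10
W = [[random.gauss(0, 0.01) for _ in range(n_pixels)] for _ in range(n_classes)]
b = [random.gauss(0, 0.01) for _ in range(n_classes)]

def predict(image):
    logits = [sum(w * p for w, p in zip(row, image)) + bias
              for row, bias in zip(W, b)]
    return softmax(logits)

blank = [0.0] * n_pixels
probs = predict(blank)
digit = probs.index(max(probs))
# The probabilities still sum to 1 over the ten digits: the model has no
# way to answer "this is not a digit at all".
print(digit, round(sum(probs), 6))
```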

I still argue that humans learn the same way as an AI does, by trial and error and smart self-correction, and that we're actually just as bad at explaining ourselves as a computer is.  We have layers of understanding for general use, whereas most AI's so far only have one all-encompassing layer for a specific use, but our ability to explain any particular layer is just as impossible as it is for a computer to explain its one layer.  No difference there whatsoever.  When we explain ourselves, we essentially list the results of each layer, but we can't explain the layers themselves.  So if we make a computer that understands in layers like we do, and train each layer separately, then it could offer the same explanation that a human would, thus nullifying that argument.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19520
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #128 on: December 28, 2021, 05:44:16 pm »
This is actually what real judges try to do.
They can try all they want, but rich white people get lighter sentences all the time. You don't even have to cherry pick cases, they are all over the place.

So, I would rather see some circumstance not be taken into account in a strict algorithm, than let a random judge decision have a significant weight.

And then have a reasonably simple way of extending the system to take that circumstance into account in the following cases. We already sort of do this, just not very efficiently. And even when we do, we still fail to apply those new rules.

And such problems do demonstrably become unwittingly baked into the ML algorithms. (For references, read comp.risks and its archives for many examples)

Then, unlike with humans, it is not possible to "ask" the algorithm why it generated that result. "Because the (infallible) computer says so" is the result.

Yes, they do.  That's an unavoidable problem with training-based AI.  Any bias in the training data will be reflected in the results, and the training data is ALWAYS biased in some way or another.  But it *does* give you the opportunity to step back once it's running and see, in the third person, what your biases were and probably still are.  Instead of carrying those biases forever because, seen from up close, you think they're "just how the world works, deal with it", you have the opportunity to correct them: first by stepping back to see them at all, and then by providing some counter-training to the AI.

Nice idea, but even supposedly intelligent people don't do that, unfortunately.

If the output matches someone's desires or objectives or prejudices, they won't want to look further.

Consider agile continuous-integration software development. Frequently coders are happy that the unit tests give a green light, saying that means the code is working. That is nonsense, of course, because it depends on the quality of the requirements and the quality of the tests. For example, when asked which unit tests proved ACID properties, they look blank and can't conceive that unit tests cannot prove ACID properties.

Quote
And that's also the answer to why it generated a particular result.  That was the average of all the training data that it had to work with.  A handwritten digit decoder, for example, that was never given a blank, will always offer a number, even if it's later given a blank.  That's a simple example, but I think you can extrapolate it to see how hard it is to create a good set of training data.  Thus, anyone who practically worships an infallible machine, should themselves be removed from the process.  But the machine should stay and continue to be refined.

Firstly, the result isn't the "average" of the input: there is no way of knowing how close the decision is to a breakpoint. There are many examples of single-pixel changes in images causing the classification to be completely different.

Secondly, if you simply throw more examples into the pot, you will probably just get different false classifications.
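A minimal illustration of the breakpoint problem (a hypothetical two-input linear classifier; real networks are far more complex, but the boundary behaviour is the same in spirit):

```python
# A hypothetical two-input linear classifier, to illustrate the breakpoint
# problem: near the decision boundary, a tiny change in one input (the
# one-pixel analogue) flips the classification entirely.  Nothing about
# the output tells you how close to the boundary the decision was.
w = [0.5, -0.5]

def classify(x):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return "cat" if score > 0 else "dog"

x = [1.0, 0.999]             # sits just on the "cat" side of the boundary
x_perturbed = [1.0, 1.002]   # one "pixel" nudged by 0.003

print(classify(x))            # cat
print(classify(x_perturbed))  # dog - a completely different class
```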

Quote
I still argue that humans learn the same way as an AI does, by trial and error and smart self-correction, and that we're actually just as bad at explaining ourselves as a computer is.  We have layers of understanding for general use, whereas most AI's so far only have one all-encompassing layer for a specific use, but our ability to explain any particular layer is just as impossible as it is for a computer to explain its one layer.  No difference there whatsoever.  When we explain ourselves, we essentially list the results of each layer, but we can't explain the layers themselves.  So if we make a computer that understands in layers like we do, and train each layer separately, then it could offer the same explanation that a human would, thus nullifying that argument.

There's a lot of "ifs" in there, which aren't justified.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14489
  • Country: fr
Re: Machine Learning Algorithms
« Reply #129 on: December 28, 2021, 06:20:13 pm »
Yes, all this is nice, but again, the question of liability remains stubbornly unanswered.
It can be freaking annoying when questions are asked and nobody cares to answer.
And yes, when humans are in a position of making important decisions, they ARE liable.

This definitely IS a pressing question, one that anyone serious IS actually asking. Even, and I'd say in particular, those that are actively using or working on AI systems! Just read the article. And many others. Even Musk, who uses AI every chance he gets, says that.

And no, current AI is absolutely NOTHING like human intelligence. The fact that NNs are now the main tool used in machine learning seems to give that illusion somehow, at least to the uninformed. NNs mimic interacting neurons only in a very simplistic way. They are a very cool tool for finding patterns in very large datasets, and they work rather well for that. Most of it is machine learning. "AI" is largely a misnomer, and whether it even actually qualifies as "intelligence" - supposing we can define, without resorting to circular logic, what that is - does, IMHO, not matter one bit. We have to stop with that fallacy, IMHO. It's just a nice tool, and we should treat it as any other tool.
 

Online SiliconWizard

Re: Machine Learning Algorithms
« Reply #130 on: December 28, 2021, 06:53:24 pm »
Firstly, the result isn't the "average" of the input: there is no way of knowing how close the decision is to a breakpoint. There are many examples of single-pixel changes in images causing the classification to be completely different.

Secondly, if you simply throw more examples into the pot, you will probably just get different false classifications.

Yup. NNs are absolutely non-linear, which makes analyzing them an intractable problem.
While the inputs of an "artificial neuron" are combined in a linear fashion, the output then goes through an activation function to get any useful result out of it, and that function is almost never linear.
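A minimal sketch of such an artificial neuron (tanh is just one common activation choice):

```python
# One artificial neuron: a linear combination of inputs followed by a
# non-linear activation function (tanh here; ReLU, sigmoid, etc. are
# equally common - and equally non-linear).
import math

def neuron(inputs, weights, bias, activation=math.tanh):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

w, b = [1.0, 2.0], 0.0
y1 = neuron([1.0, 1.0], w, b)      # tanh(3)
y2 = neuron([2.0, 2.0], w, b)      # tanh(6)
# If the neuron were linear, doubling the input would double the output.
print(abs(y2 - 2 * y1) > 0.2)      # True: it clearly doesn't
```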

Quote
I still argue that humans learn the same way as an AI does, by trial and error and smart self-correction, and that we're actually just as bad at explaining ourselves as a computer is.  We have layers of understanding for general use, whereas most AI's so far only have one all-encompassing layer for a specific use, but our ability to explain any particular layer is just as impossible as it is for a computer to explain its one layer.  No difference there whatsoever.  When we explain ourselves, we essentially list the results of each layer, but we can't explain the layers themselves.  So if we make a computer that understands in layers like we do, and train each layer separately, then it could offer the same explanation that a human would, thus nullifying that argument.

There's a lot of "ifs" in there, which aren't justified.

Yeah. A lot of assertions that aren't backed by any proof as well.

Current NNs are just a very simplistic way of modeling the human brain, to begin with, as we said. That certainly makes neurobiologists chuckle.
The other point is that we are comparing relatively small NNs, trained with relatively small datasets, to what a human brain is and what it's exposed to over a lifetime. IIRC, it's estimated that just modeling the number of neurons and their interconnections (assuming we model all that right, which is again dubious at this point) would require more matter than is estimated to exist in our known universe, or something like that. So yeah. We can keep playing with toys. Hey, I like toys. I just don't pretend they are what they aren't. (Just like I don't pretend that a crappy TikTok video is worth just as much as a good feature film, something a few seem to have no problem claiming these days.)

With that said, claiming that since humans can be wrong as well, we should not worry about AI being wrong is one pretty nice fallacy. An almost textbook strawman argument.
« Last Edit: December 28, 2021, 06:55:51 pm by SiliconWizard »
 

Online tggzzz

Re: Machine Learning Algorithms
« Reply #131 on: December 28, 2021, 07:23:25 pm »
Yes, all this is nice, but again, the question of liability remains stubbornly unanswered.
It can be freaking annoying when questions are asked and nobody cares to answer.
And yes, when humans are in a position of making important decisions, they ARE liable.

This definitely IS a pressing question, one that anyone serious IS actually asking. Even, and I'd say in particular, those that are actively using or working on AI systems! Just read the article. And many others. Even Musk, who uses AI every chance he gets, says that.

The liability question is a touchstone question: the answer will decide the fate of industries and/or individuals.

When it was announced, a few years ago, that there would be UK trials and experiments with autonomous vehicles, the insurance industry was a participant in the relevant studies. I haven't heard the results.

Currently Tesla is being deceitful. It lets people believe the cars are driverless, but when there is an accident, the responsibility is dumped on the driver.
 

Online SiliconWizard

Re: Machine Learning Algorithms
« Reply #132 on: December 28, 2021, 07:28:39 pm »
Currently Tesla is being deceitful. It lets people believe the cars are driverless, but when there is an accident, the responsibility is dumped on the driver.

Yep. We talked about it earlier.
Funnily enough, Elon Musk talks about that on a regular basis. I've heard a few talks in which he was saying that we need to regulate all this as soon as possible, and he seems pretty conscious of all the risks of leaving it unregulated. But that's sweet talk. Meanwhile, he's still perfectly OK with selling cars without the regulatory framework that he claims is needed, and he takes advantage of the lack of regulation.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8179
  • Country: fi
Re: Machine Learning Algorithms
« Reply #133 on: December 29, 2021, 11:34:55 am »
And no, current AI is absolutely NOTHING like human intelligence.

Indeed; even if an NN could theoretically work like a human brain, current implementations are at the level of an ant brain due to physical (electrical) limitations, and given the current rate of development (improvements in transistor counts, etc.), maybe we'll be there around the year 2500.

This is something laymen and NN/AI fanboys easily overlook. The results are maybe encouraging, but the target of human-like intelligence is really far away.

This being said, ants can do pretty amazing things. But they can't drive a car or prosecute.

A human can, by utilizing their human brain and classical non-AI algorithms, organize massive amounts of high-quality training data to make a simple (low neuron count), ant-level NN behave seemingly much better than expected from its animal counterpart. But this is not real intelligence; it's a gimmick to hide the primitive level of intelligence. The result is a total lack of complex context understanding, even if these NNs perform well in constrained classification tasks.
« Last Edit: December 29, 2021, 11:40:48 am by Siwastaja »
 

Online tggzzz

Re: Machine Learning Algorithms
« Reply #134 on: December 29, 2021, 12:12:23 pm »
Two examples over the last couple of days for the AI/ML fanbois to consider...

What training set would guarantee that nothing like this would occur?
Quote
Amazon has updated its Alexa voice assistant after it "challenged" a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.

The suggestion came after the girl asked Alexa for a "challenge to do".

"Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs," the smart speaker said.
https://www.bbc.co.uk/news/technology-59810383

Would they be happy if they were in a jurisdiction that automatically charged them with crimes (only a 3% error rate!)?
Quote
In a scenario that's part "Robocop" and part "Minority Report," researchers in China have created an AI that can reportedly identify crimes and file charges against criminals. Futurism reports:
The AI was developed and tested by the Shanghai Pudong People's Procuratorate, the country's largest district public prosecution office, South China Morning Post reports. It can file a charge with more than 97 percent accuracy based on a description of a suspected criminal case. "The system can replace prosecutors in the decision-making process to a certain extent," the researchers said in a paper published in Management Review seen by SCMP.
https://yro.slashdot.org/story/21/12/27/2129202/china-created-ai-prosecutor-that-can-charge-people-with-crimes
 

Online SiliconWizard

Re: Machine Learning Algorithms
« Reply #135 on: December 29, 2021, 05:44:58 pm »
This is something laymen and NN/AI fanboys easily overlook. The results are maybe encouraging, but the target of human-like intelligence is really far away.

Yes, but again I think they get confused because of the apparent power of these tools compared to what *we* can do, for certain kinds of analysis or calculations. Heck, that's nothing new: even a basic calculator can do calculations infinitely faster, with a much lower probability of getting them wrong, than any human could. Same for statistically analyzing huge amounts of data. Digital tools are very good at that, and thus are very useful. They are tools. We've been making tools to help us with many different tasks for as long as the human species, in its various forms, has existed, and tools are exactly that: things that help us do what we could not do without them - or at least do those things more efficiently, faster, etc.

The "tipping point" here appears as soon as we claim to design, and use, tools that can not just help us but replace us.
The associated issue, as I've stressed repeatedly, is liability. But I think it's deeply related to the above: if a tool is still defined as a tool, then the chain of liability is the classic one. Usually, the user is liable if it can be proven that they used the tool improperly, provided that the proper use was duly described in a user's manual, clearly enough for every potential user to understand, and not missing critical information (or, if the tool is very simple, that "proper" use was trivial to infer). If that's not the case, the liability moves to the next item in the chain. It could be the reseller, if they failed to give proper direction to the buyer when they sold the tool. Otherwise, it goes to the vendor, who can in turn go after one of their subcontractors if some subcontracted work or part is faulty, etc.

All of this shatters to pieces when you start using automated decision tools. Interestingly, even in that case we could use the above process for determining liability, except that it's such a large can of worms that liability becomes very hard to determine - and this fuzziness is also very convenient for all the people involved.

Although not just dealing with those issues, but being more general, this is interesting: https://www.jurist.org/news/2021/10/chile-becomes-first-country-to-pass-neuro-rights-law/
« Last Edit: December 29, 2021, 05:47:53 pm by SiliconWizard »
 

Offline AaronD

Re: Machine Learning Algorithms
« Reply #136 on: December 31, 2021, 06:49:14 pm »
I've been away for a few days and, coming back to this, it looks to me like "the anti-singularity crowd" hijacked this thread surprisingly early and no one caught it, including myself.  The argument that I see is essentially that the entire concept is fundamentally evil because we don't understand it, and therefore all further development of it in any field should be banned.  (That ban is a lost cause already.  You can disagree with widespread AI all you want, but it's still going to happen.  You might delay it, but that's *all* you can do.)  There is very little here about how to pursue it responsibly.

For example, it should probably work in a restricted space, so that the classic and most difficult problems become irrelevant.  It doesn't need to account for human idiots on the road because they're physically blocked from having an influence, both by non-entry for non-compliant vehicles, and by disabling the detailed controls with no way to get them back while in that area.  (security first, on every front, not as an afterthought on a "cool silicon valley toy")  If you want to drive manually, then you stay off that road.  Period.
Not much different at that point from an automated rail line like you might find at a large airport, except you might keep a personal "powered train car" in your garage.  (or maybe there's a massive automated taxi service and ALL human driving is banned, or...)

For Criminal Justice and other bureaucratic functions, two things need to happen (you're free to argue "good luck" on both |O):
  • It needs to be drilled-in, constantly, well beyond the point of being offensive to the trainees, that this is NOT a god!  Any who show that they still don't understand that, need to be banned from bureaucracy for life.  Not just the position where they showed it, but *any* bureaucratic position, *anywhere*.  Yes, that's harsh, but the harshness doesn't diminish the requirement.  (can you tell I don't like vogons?)
  • "Retraining" to correct bias, should apply both to the AI and to humans.  That bias can never be removed completely, so it'll always be an ongoing process, and we'll quickly get to the problem of, "what is unbiased anyway?"  Especially when narrow-minded political interests are involved.  (including everything from the CCP to pretty much every special interest group in the Western World)  So before we get too serious about that, maybe we need to fix the general attitude so that we're not so narcissistic on every level.  That by itself is far from trivial.



None of that, however, is what the OP had in mind.  For reference, the original post is:

I don't understand how we choose algorithms in Machine Learning. For example, I need to make a model that identifies which flower is on the plant. A Google search shows that we will need a CNN algorithm, but I don't understand why the CNN is the only one useful for this project.
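To the OP's actual question: a CNN isn't the only possible choice, but convolution suits images because one small filter is slid over the whole picture, detecting the same local pattern (an edge, a petal boundary) wherever it appears, with far fewer weights than a fully connected layer. A bare-bones sketch of that core operation (a toy 3x4 "image", not a flower classifier):

```python
# Valid (no-padding) 2D cross-correlation - the core operation a CNN stacks
# and learns the kernels of.  The same small kernel is applied at every
# position, so a local pattern is detected wherever it occurs.
def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(kernel[i][j] * image[r + i][c + j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A vertical-edge kernel responds wherever brightness jumps left-to-right.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
result = conv2d(image, edge)
print(result)   # [[0, 2, 0], [0, 2, 0]] - strong response at the edge column
```

A real flower classifier stacks many such learned kernels with non-linearities and pooling, but this is the operation that makes the architecture a *convolutional* network.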
 

Online tggzzz

Re: Machine Learning Algorithms
« Reply #137 on: December 31, 2021, 08:02:11 pm »
Those straw-man arguments completely fail to respond to - let alone answer - the points made. Simply dismissing other people's points because you haven't bothered to consider their validity is very unimpressive.

That makes you look like a TruFan zealot, lacking judgement.
 

Offline AaronD

Re: Machine Learning Algorithms
« Reply #138 on: January 01, 2022, 12:37:09 am »
Those straw-man arguments completely fail to respond to - let alone answer - the points made. Simply dismissing other people's points because you haven't bothered to consider their validity is very unimpressive.

That makes you look like a TruFan zealot, lacking judgement.

You sound to me like just as much of a strawman as you accuse me of being.
 

Offline ralphrmartin

  • Frequent Contributor
  • **
  • Posts: 480
  • Country: gb
    • Me
Re: Machine Learning Algorithms
« Reply #139 on: January 01, 2022, 04:58:45 pm »
Would they be happy if they were in a jurisdiction that automatically charged them with crimes (only a 3% error rate!)?

I took this to read that 97% of charges resulted in successful prosecution.
Compare that to the UK,  where the error rate is of the order of 20%:
https://www.cps.gov.uk/publication/cps-data-summary-quarter-1-2020-2021
(Last two quarters quoted had successful prosecution rates of 84% and 78%).
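The arithmetic behind the "order of 20%" figure, for the record:

```python
# Error rates implied by the quoted CPS success rates (84% and 78%).
success_rates = [0.84, 0.78]
error_rates = [round(1 - r, 2) for r in success_rates]
print(error_rates)   # [0.16, 0.22] - i.e. of the order of 20%
```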

This sounds like a sensible way of doing things. Use the AI to decide when to take the case to court, then let the humans in the court make the final decision.
 

Online tggzzz

Re: Machine Learning Algorithms
« Reply #140 on: January 01, 2022, 06:29:24 pm »
Would they be happy if they were in a jurisdiction that automatically charged them with crimes (only a 3% error rate!)?

I took this to read that 97% of charges resulted in successful prosecution.
Compare that to the UK,  where the error rate is of the order of 20%:
https://www.cps.gov.uk/publication/cps-data-summary-quarter-1-2020-2021
(Last two quarters quoted had successful prosecution rates of 84% and 78%).

This sounds like a sensible way of doing things. Use the AI to decide when to take the case to court, then let the humans in the court make the final decision.

There are several possible interpretations. Who knows what it means?!
 

Online Marco

  • Super Contributor
  • ***
  • Posts: 6723
  • Country: nl
Re: Machine Learning Algorithms
« Reply #141 on: January 01, 2022, 08:22:45 pm »
Same things can be said about humans.
Up to a point, if we see something peculiar that suggests an optical illusion or an unknown configuration of a known object type, we can switch from recognition to reasoning: constructing models of how the underlying image could correspond to known examples through all the possible real-world transformations our experience-trained neural network can come up with. It's not very fast, but often still fast enough to be useful during, say, driving.

Reasoning is not a once-through process; it's the domain of hard AI.
 

Online SiliconWizard

Re: Machine Learning Algorithms
« Reply #142 on: January 01, 2022, 08:31:39 pm »
Of course. And the fact that current AI is very, very far from human reasoning is not even debated anywhere except among laymen, businessmen and politicians.
Absolutely no researcher in AI will ever claim that. If you ever find one, do question their scientific background, intellectual honesty and possible conflicts of interest.
 
The following users thanked this post: Siwastaja

Online Marco

Re: Machine Learning Algorithms
« Reply #143 on: January 01, 2022, 09:27:58 pm »
Yet a researcher who is perfectly happy spending their entire career on once-through classifiers/predictors goes along with everyone calling them an AI researcher.

Much like Elon calling driver assist "Autopilot", they know exactly what they are doing, and it's not honest.  Most of the field has been disingenuously named for decades.
 

Online SiliconWizard

Re: Machine Learning Algorithms
« Reply #144 on: January 01, 2022, 09:40:08 pm »
Yet a researcher who is perfectly happy spending their entire career on once-through classifiers/predictors goes along with everyone calling them an AI researcher.

Much like Elon calling driver assist "Autopilot", they know exactly what they are doing, and it's not honest.  Most of the field has been disingenuously named for decades.

Oh, yeah. As I think I already said, even "AI" here is a misnomer, I agree, but most "honest" researchers I've seen actually call that "machine learning" exclusively, and not AI.
The "AI" term itself is marketing, and to be fair, the OP themselves didn't use it.

But this term is not neutral; it's a powerful communication tool. We would probably not let "machine learning", in those terms, make critical decisions. But once it's coined "AI", then everything seems to be possible.
« Last Edit: January 01, 2022, 09:43:11 pm by SiliconWizard »
 

Offline NiHaoMike

  • Super Contributor
  • ***
  • Posts: 9021
  • Country: us
  • "Don't turn it on - Take it apart!"
    • Facebook Page
Re: Machine Learning Algorithms
« Reply #145 on: January 01, 2022, 11:35:17 pm »
Much like Elon calling driver assist "Autopilot", they know exactly what they are doing, and it's not honest.  Most of the field has been disingenuously named for decades.
Aren't commercial aircraft autopilots basically equivalent to Level 2 (pilot must be ready to take control at any time) which is what Tesla's system is? I think the real problem is that the general public doesn't really understand what an aircraft autopilot does.
Cryptocurrency has taught me to love math and at the same time be baffled by it.

Cryptocurrency lesson 0: Altcoins and Bitcoin are not the same thing.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
Re: Machine Learning Algorithms
« Reply #146 on: January 02, 2022, 02:57:21 am »
Much like Elon calling driver assist "Autopilot", they know exactly what they are doing, and it's not honest.  Most of the field has been disingenuously named for decades.
Aren't commercial aircraft autopilots basically equivalent to Level 2 (pilot must be ready to take control at any time) which is what Tesla's system is? I think the real problem is that the general public doesn't really understand what an aircraft autopilot does.

EXACTLY.

Many autopilots in small planes can maintain a set heading and altitude, and nothing more. It lets you take your hands off the controls so that you can stretch, check your charts, communicate on the radio, etc. It will happily fly you into the side of a mountain, if there is one there. The same if you tell it to hold altitude but your engine power is, or becomes, insufficient for some reason -- retarded throttle, lack of fuel, carb icing, etc. You'll get slower and slower until the autopilot stalls you.

The autopilots in small turboprops will disengage and sound an alert below a certain minimum speed: 99 knots for the G1000 in a Quest Kodiak, for example, and probably similar in a Cessna Caravan. Those are multi-million dollar planes. I'm not sure whether the autopilots in $50k (used) Cessnas and Pipers will do that -- I've just asked a friend with a "turbo"(charged) Piper Arrow and will report back.

Slightly better autopilots can maintain a set rate of climb or descent to a pre-programmed altitude. But they don't do anything to make sure you have enough engine power to do this, or to prevent you exceeding your maximum speed on a descent.

Slightly better autopilots will enable you to automatically follow a VOR radial or a GPS track. And maybe even program in a short sequence of paths from one radio beacon or GPS location to another. Now we're getting to that Garmin G1000 in the Caravan or Kodiak.

But if there's a mountain in the way, they'll happily fly you straight into it. Or into another plane. Or into a storm.
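The altitude-hold-with-minimum-speed-disengage behaviour described above can be sketched as a toy control loop. To be clear, this is purely illustrative: the function name, the gain, and the clamp are all made up here, and real avionics are certified control systems, not twenty lines of Python.

```python
# Toy altitude-hold sketch (illustrative only -- names and numbers made up).
MIN_AIRSPEED_KT = 99.0  # minimum-speed floor, like the G1000 figure above

def altitude_hold_step(target_alt_ft, alt_ft, airspeed_kt, gain=0.01):
    """Return (engaged, pitch_command_deg) for one control step.

    The controller only tracks altitude; it knows nothing about
    terrain, traffic, or whether engine power is sufficient.
    """
    if airspeed_kt < MIN_AIRSPEED_KT:
        # Disengage (and, in a real system, sound an alert) rather
        # than keep pitching up into a stall.
        return False, 0.0
    # Simple proportional pitch command toward the target altitude,
    # clamped to +/- 5 degrees.
    error_ft = target_alt_ft - alt_ft
    pitch_cmd = max(-5.0, min(5.0, gain * error_ft))
    return True, pitch_cmd
```

The point is what's *not* in it: nothing about terrain, traffic, or available engine power -- which is exactly why a basic autopilot will fly you into a mountain, or let your speed decay until the minimum-speed disengage (if fitted) kicks in.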

What Tesla's cars can do is already 10x, 100x more advanced than any normal airliner's autopilot. Not only navigating roads that are far, far more complex than any air navigation route, but also dealing with other traffic, and pedestrians, and unexpected blockages.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14489
  • Country: fr
Re: Machine Learning Algorithms
« Reply #147 on: January 02, 2022, 03:05:05 am »
What Tesla's cars can do is already 10x, 100x more advanced than any normal airliner's autopilot. Not only navigating roads that are far, far more complex than any air navigation route, but also dealing with other traffic, and pedestrians, and unexpected blockages.

That's the point. You can't claim they are exactly the same, when they are absolutely not. Autopilots for aircraft do not have to implement obstacle avoidance, nor follow complex routes at a scale of less than 1 meter -- which are the very hard parts of those cars' autopilots.

Oh, and avionics systems are designed and tested with stringent methods. Not quite the same level as automotive.

So, those car autopilots are indeed much more complex, designed under a somewhat easier regulatory framework, and using technology that we don't completely master. Yeah.

(Of course, on top of that, we can also mention that aircraft pilots are trained professionals, which the average Joe who can buy one of those cars isn't. He has never had any training, let alone an exam, covering the autopilot function. That's a major issue. If anything, being legally authorized to drive a car with an autopilot should, IMO, require training and an exam, and be noted on your driver's license.)
« Last Edit: January 02, 2022, 03:07:04 am by SiliconWizard »
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4040
  • Country: nz
Re: Machine Learning Algorithms
« Reply #148 on: January 02, 2022, 03:56:50 am »
My 2008 Subaru [1] has camera-based adaptive cruise control and I use it a huge percentage of the time that I'm driving in traffic, whether city or highway. Awesome that it operates right down to torque converter creep speed (or below, with brakes).

I think it's pretty much an optimum level of automation. You still need to steer, so you need to watch the road, but it's amazing how much cognitive load it removes, judging by how much longer I can drive without getting fatigued.

It's not as good as the 2017 Outback I owned in California in 2019, but it's good enough. And the car is much more fun and cost me 1/4 as much to buy :-)


[1] yes, 2008, not 2018! "2.5XT EyeSight", a world first, they claimed: https://www.subaru.co.jp/news/archives/08_04_06/08_05_08_02.html They were pretty expensive when new, but a lot of used ones have come into NZ in the last year or two at $10k to $12k or so (around USD $7k to $8k) with 80,000 km / 50,000 miles or so. I imagine probably in the UK too.

https://www.fuelly.com/car/subaru/outback/2008/brucehoult/1005227
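The follow behaviour described above -- hold the driver's set speed, slow to keep a gap behind a lead car, all the way down to creep/stop -- can be sketched with a constant time-gap rule. Everything here (function name, gap values, gain) is made up for illustration; it's nothing like Subaru's actual EyeSight code.

```python
def acc_target_speed(set_speed, lead_distance_m, lead_speed,
                     min_gap_m=10.0, time_gap_s=2.0):
    """Choose a target speed (m/s) for a toy adaptive cruise control.

    Hold the driver's set speed unless a lead vehicle forces us to
    slow down to keep a constant time gap. Illustrative logic only.
    """
    if lead_distance_m is None:
        # No vehicle ahead: plain cruise control at the set speed.
        return set_speed
    # Desired gap grows with the lead vehicle's speed.
    desired_gap = min_gap_m + time_gap_s * lead_speed
    if lead_distance_m >= desired_gap:
        return set_speed
    # Too close: match the lead car, slowing further the closer we get.
    # This can go all the way to 0.0, i.e. creeping/stopping behind a
    # stopped car, as described above.
    shortfall = desired_gap - lead_distance_m
    return max(0.0, lead_speed - 0.5 * shortfall)
```

With a stopped car a few metres ahead this returns 0.0, which is the "right down to creep speed (or below, with brakes)" behaviour; with a clear road it's just cruise control.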
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 19520
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Machine Learning Algorithms
« Reply #149 on: January 02, 2022, 09:58:56 am »
What Tesla's cars can do is already 10x, 100x more advanced than any normal airliner's autopilot. Not only navigating roads that are far, far more complex than any air navigation route, but also dealing with other traffic, and pedestrians, and unexpected blockages.

That's the point. You can't claim they are exactly the same, when they are absolutely not. Autopilots for aircraft do not have to implement obstacle avoidance, nor follow complex routes at a scale of less than 1 meter -- which are the very hard parts of those cars' autopilots.

Oh, and avionics systems are designed and tested with stringent methods. Not quite the same level as automotive.

So, those car autopilots are indeed much more complex, designed under a somewhat easier regulatory framework, and using technology that we don't completely master. Yeah.

(Of course, on top of that, we can also mention that aircraft pilots are trained professionals, which the average Joe who can buy one of those cars isn't. He has never had any training, let alone an exam, covering the autopilot function. That's a major issue. If anything, being legally authorized to drive a car with an autopilot should, IMO, require training and an exam, and be noted on your driver's license.)

Just so.

Firstly, inadequate testing and training are no longer limited to road vehicles.

Secondly, untrained drivers suffer from the Dunning-Kruger effect: they don't know that they don't know what the "autopilot" won't do properly.

Both of those are illustrated by the Boeing 737 MAX. And that's in a much simpler environment!
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

