This is actually what real judges try to do. They can try all they want, but rich white people get lighter sentences all the time. You don't even have to cherry-pick cases; they are all over the place.
So I would rather see some circumstance not be taken into account by a strict algorithm than let a random judge's decision carry significant weight.
And then have a reasonably simple way of extending the system to take that circumstance into account in subsequent cases. We already sort of do this, just not very efficiently. And even when we do, we still fail to apply those new rules.
And such problems do demonstrably become unwittingly baked into ML algorithms. (For references, see comp.risks and its archives for many examples.)
Then, unlike with humans, it is not possible to "ask" the algorithm why it generated that result. "Because the (infallible) computer says so" is the only answer you get.
Yes, they do. That's an unavoidable problem with training-based AI: any bias in the training data will be reflected in the results, and the training data is ALWAYS biased in some way or another. But it *does* give you the opportunity, once the system is running, to step back and see in the third person what your biases were and probably still are. Instead of carrying those biases forever because, seen from up close, you think they're "just how the world works, deal with it", you get the chance to correct them: first by stepping back to see them at all, and then by providing some counter-training to the AI.
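For what that "stepping back" might look like in practice, here is a minimal Python sketch of one common audit: compare outcome rates per group in the training data, then derive inverse-frequency sample weights as a crude form of counter-training. The groups, outcomes, and numbers are purely illustrative, not from any real dataset.

from collections import Counter

# Toy training data: (group, outcome) pairs. Entirely made up for illustration.
records = [
    ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 0), ("B", 0), ("B", 1),
]

positives = Counter(g for g, y in records if y == 1)
totals = Counter(g for g, _ in records)
for g in sorted(totals):
    # Per-group positive rate: the "step back and look" part of the audit.
    print(g, positives[g] / totals[g])

# Inverse-frequency sample weights: a crude counter-training signal so each
# group contributes equally when the model is retrained.
weights = [1.0 / totals[g] for g, _ in records]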
And that's also the answer to why it generated a particular result: that was the average of all the training data it had to work with. A handwritten-digit decoder, for example, that was never given a blank will always offer a number, even if it's later given a blank. That's a simple example, but I think you can extrapolate it to see how hard it is to create a good set of training data. Thus, anyone who practically worships an infallible machine should themselves be removed from the process. But the machine should stay and continue to be refined.
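A minimal sketch of the blank-digit problem, in plain NumPy with a stand-in (randomly initialized) linear model: a softmax classifier has no "none of the above" output, so even an all-blank image gets mapped to some digit.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))    # stand-in for trained digit-model weights
b = rng.normal(size=10)

def predict(image):
    logits = W @ image.ravel() + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()           # softmax: the probabilities always sum to 1
    return probs.argmax()          # argmax always picks *some* digit, 0-9

blank = np.zeros((28, 28))         # an input the model was never trained on
print(predict(blank))              # still prints a digit; there is no "blank" class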
I still argue that humans learn the same way an AI does, by trial and error and smart self-correction, and that we're actually just as bad at explaining ourselves as a computer is. We have layers of understanding for general use, whereas most AIs so far have only one all-encompassing layer for a specific use, but explaining any particular layer is just as impossible for us as it is for a computer to explain its one layer. No difference there whatsoever. When we explain ourselves, we essentially list the results of each layer, but we can't explain the layers themselves. So if we make a computer that understands in layers like we do, and train each layer separately, it could offer the same explanation a human would, thus nullifying that argument.
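As a hedged sketch of that "train each layer separately" idea (close in spirit to what the literature calls concept-bottleneck models): a first stage is trained to output human-named concepts, a second stage decides from those, and the "explanation" is simply the list of intermediate results. All names and sizes below are illustrative.

import torch
import torch.nn as nn

concepts = ["has_wings", "has_fur", "lays_eggs"]   # human-readable layer outputs

# Stage 1: trained (separately) to predict the named concepts from raw features.
concept_net = nn.Sequential(
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, len(concepts)), nn.Sigmoid(),
)
# Stage 2: trained (separately) to decide from the concepts, e.g. bird vs mammal.
decision_net = nn.Linear(len(concepts), 2)

x = torch.randn(1, 64)             # some input features
c = concept_net(x)
y = decision_net(c)

# The "human-style explanation": list the result of each named layer.
for name, score in zip(concepts, c.squeeze().tolist()):
    print(f"{name}: {score:.2f}")
print("decision logits:", y.squeeze().tolist())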
Firstly, the result isn't the "average" of the input: there is no way of knowing how close a decision is to a breakpoint. There are many examples of single-pixel changes in images causing the classification to be completely different.
Secondly, if you simply throw more examples into the pot, you will probably just get different false classifications.
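On the first point: the published single-pixel attacks target deep networks, but even a toy linear classifier shows how an input sitting near a decision breakpoint can have its classification flipped by a tiny change to one feature. The weights and input below are made up for illustration.

import numpy as np

w = np.array([0.5, -1.0, 2.0])     # made-up "learned" weights
x = np.array([1.0, 1.0, 0.26])     # input sitting just past the decision boundary

print(np.sign(w @ x))              # 1.0: classified one way
x[2] -= 0.02                       # a tiny change to a single feature...
print(np.sign(w @ x))              # -1.0: ...and the classification flips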
There are a lot of "ifs" in that layered-training argument, and they aren't justified.
Yes, all this is nice, but again, the question of liability remains stubbornly unanswered.
It can be freaking annoying when questions are asked and nobody cares to answer them.
And yes, when humans are in a position of making important decisions, they ARE liable.
This definitely IS a pressing question, one that anyone serious IS actually asking. Especially, I'd say, those who are actively using or working on AI systems! Just read the article. And many others. Even Musk, who uses AI every chance he gets, says so.
Currently Tesla is being deceitful. It lets people believe the cars are driverless, but when there is an accident, the responsibility is dumped on the driver.
And no, current AI is absolutely NOTHING like human intelligence.
Amazon has updated its Alexa voice assistant after it "challenged" a 10-year-old girl to touch a coin to the prongs of a half-inserted plug.
The suggestion came after the girl asked Alexa for a "challenge to do".
"Plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs," the smart speaker said.
In a scenario that's part "Robocop" and part "Minority Report," researchers in China have created an AI that can reportedly identify crimes and file charges against criminals. Futurism reports:
The AI was developed and tested by the Shanghai Pudong People's Procuratorate, the country's largest district public prosecution office, South China Morning Post reports. It can file a charge with more than 97 percent accuracy based on a description of a suspected criminal case. "The system can replace prosecutors in the decision-making process to a certain extent," the researchers said in a paper published in Management Review seen by SCMP.
This is something laymen or NN/AI fanboys easily overlook. The results may be encouraging, but the target of human-like intelligence is still a long way off.
I don't understand how we choose algorithms in machine learning. For example, I need to make a model that identifies which flower is on a plant. A Google search shows that we will need a CNN, but I don't understand why the CNN is the right choice for this project.
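A CNN isn't the *only* model that can classify flower images, but it is the default choice because convolutions scan small local patches, so the same learned filter detects a petal edge or texture anywhere in the image, instead of learning a separate weight for every pixel position. A minimal PyTorch sketch (the layer sizes and class count are illustrative, assuming 64x64 RGB inputs):

import torch
import torch.nn as nn

class FlowerNet(nn.Module):
    def __init__(self, num_classes=5):                   # class count is illustrative
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # local edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # tolerate small shifts
            nn.Conv2d(16, 32, kernel_size=3, padding=1), # combine edges into shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 64x64 input -> two 2x2 poolings -> 16x16 feature maps, 32 channels
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                                # x: (batch, 3, 64, 64) RGB
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = FlowerNet()
print(model(torch.randn(1, 3, 64, 64)).shape)            # torch.Size([1, 5])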
Those straw-man arguments completely fail to respond to - let alone answer - the points made. Simply dismissing other people's points because you haven't bothered to consider their validity is very unimpressive.
That makes you look like a TruFan zealot, without judgement.
Would they be happy if they were in a jurisdiction that automatically charged them with crimes (only a 3% error rate!)?
I took this to read that 97% of charges resulted in successful prosecution.
Compare that to the UK, where the error rate is of the order of 20%:
https://www.cps.gov.uk/publication/cps-data-summary-quarter-1-2020-2021
(The last two quarters quoted had successful prosecution rates of 84% and 78%.)
This sounds like a sensible way of doing things. Use the AI to decide when to take the case to court, then let the humans in the court make the final decision.
The same things can be said about humans.
Yet a researcher who is perfectly happy spending their entire career on once-through classifiers/predictors goes along with everyone calling them AI researchers.
Much like Elon calling driver assist "Autopilot": they know exactly what they are doing, and it's not honest. Most of the field has been disingenuously named for decades.
Aren't commercial aircraft autopilots basically equivalent to Level 2 (pilot must be ready to take control at any time), which is what Tesla's system is? I think the real problem is that the general public doesn't really understand what an aircraft autopilot does.
What Tesla's cars can do is already 10x, 100x more advanced than any normal airliner's autopilot: not only navigating roads that are far, far more complex than any air navigation route, but also dealing with other traffic, pedestrians, and unexpected blockages.
That's the point. You can't claim they are exactly the same when they are absolutely not. Autopilots for aircraft do not have to implement obstacle avoidance, nor follow complex routes at a scale of less than one meter, and those are the very hard parts of these cars' autopilots.
Oh, and avionics systems are designed and tested with stringent methods; automotive is not quite at the same level.
So those car autopilots are indeed much more complex, designed under a somewhat looser regulatory framework, and built on technology that we don't completely master. Yeah.
(Of course, on top of that, we can also mention that aircraft pilots are trained professionals, which the average Joe who can buy one of those cars is not. He has never had any training, let alone an exam, involving the autopilot function. That's a major issue. If anything, being legally authorized to drive a car with an autopilot should, IMO, require training and an exam, and be noted on your driver's license.)