Author Topic: Why don't surveillance cameras call the cops when a robbery is in progress?  (Read 9625 times)


Offline X

  • Regular Contributor
  • *
  • Posts: 184
  • Country: 00
    • This is where you end up when you die...
If the system can reliably identify intruders then logically it should just activate the machine gun. (castle law)

No need for the cops

 ;)
I would never trust a computer to reliably identify intruders and kill them, but I'm happy for the computer to smoke them out of the building:


This has been installed in a few buildings in Australia.
 

Offline CJay

  • Super Contributor
  • ***
  • Posts: 3547
  • Country: gb
Yeah, SmokeCloak is used here too. I used to buy from a PC component supplier who had it; apparently it triggered one night when one of the directors brought some Friday-night conquest back to the office with the intention of showing them his 'business acumen' on the boardroom table.
 

Offline Electro Detective

  • Super Contributor
  • ***
  • Posts: 2713
  • Country: au
More evidence of how cheap some 'directors' are   :--

Good news the prick got smoked out; the lady can find better action elsewhere  :-+
 

Offline Kilrah

  • Supporter
  • ****
  • Posts: 1857
  • Country: ch
Smoke screens are one of those things that are very effective but for some reason already cross most people's acceptance threshold... I had a friend who tried to distribute them, but regardless of all the demos he did, nearly everyone dismissed them. Too expensive, too "complicated", "meh, I don't care, there's insurance"...
 

Offline CJay

  • Super Contributor
  • ***
  • Posts: 3547
  • Country: gb
They're very effective and can be mandated by insurance companies, especially if you've had high-value prior claims, but I believe some leave a residue and are a PITA to clean up afterwards.

 

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3108
  • Country: fr
If you want to automate suspect behaviour, why not automate prediction of crime?

Using databases and profiling you can ANTICIPATE a crime before it happens and effectively stop the criminal. I don't recall the name of the movie with Tom Cruise that explores this theme.

No. Absolutely not, that's way too far down the thought crime route.

And, in fact, this is exactly what is being done. Crime prediction software that tells police where to send more patrols, based on past records of arrests and crime, is already in routine use.

https://www.predpol.com/

Now the issue with it is that it is only as good as its data - so if you have racist, corrupt cops overpolicing a black/poor/immigrant neighborhood and turning a blind eye to crime elsewhere, guess what happens when that data gets fed into the software - "Quarter XY has 75% more arrests than everywhere else, so we need to send much more police there!", perpetuating (or even worsening) the situation. This effect of "washing" institutional biases through statistics and "AI" software has also been thoroughly documented.
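That feedback loop is easy to demonstrate in a few lines. This is a toy model with made-up numbers, not PredPol's actual algorithm: three districts with identical true crime rates, one of which starts out over-policed, and patrols allocated in proportion to last year's arrest counts. The initial bias never washes out.

```python
# Toy simulation of the "garbage in, garbage out" patrol feedback loop.
# All numbers are invented for illustration.
TRUE_CRIME_RATE = [1.0, 1.0, 1.0]  # three districts, identical real crime
arrests = [150, 100, 100]          # district 0 starts out over-policed

TOTAL_PATROLS = 30
for year in range(10):
    total = sum(arrests)
    # patrols allocated in proportion to last year's recorded arrests
    patrols = [TOTAL_PATROLS * a / total for a in arrests]
    # arrests observed next year scale with patrol presence * true crime
    arrests = [p * r * 10 for p, r in zip(patrols, TRUE_CRIME_RATE)]

shares = [round(a / sum(arrests), 2) for a in arrests]
print(shares)  # district 0 still "has" the most crime, forever
```

Even with equal true crime rates, district 0 keeps its inflated share of arrests indefinitely, because the allocation rule can only confirm the data it was seeded with.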

Quote from: NANDBlog
OK, so imagine this. This is in USA. Robbery going on, camera warns the police department, they go on the scene, robber pulls out a gun, barricades themselves. Instead of a robbery, and some few thousand dollar ...

That's BS - when someone picks up a phone and calls the cops "manually" about a robbery (or whatever crime) in progress, do you think they won't show up? What is the difference?

If what you are saying were true, why would police ever respond to something like a terrorist attack or mass shooting? The robber will most likely run when seeing cops; most aren't suicidal, and the chances of winning a standoff with the police are pretty much zero. OTOH a terrorist or a nut with a gun is pretty much guaranteed to shoot - and yet the police respond, even though the chances of getting killed are much higher.


To @JoeN:

Despite the enormous hype in media about the advance of AI, do keep in mind that all that is going on right now is one field of AI (machine learning) being used as a hammer on every possible problem.

Even if we had some sort of neural network or other magical algorithm that could perfectly identify when a robbery is going on as opposed to, say, people talking or having an argument, it still wouldn't change the fact that surveillance camera images generally suck badly - even humans often have trouble interpreting what is going on in them. And that isn't really changing, especially when it comes to night-time scenes.

Humans also tend to err on the side of caution and will use common sense and experience when judging whether a situation warrants calling the cops. It is not perfect, but it acts as a huge filter for BS results. No machine learning algorithm has anything like that - all they do is use more or less sophisticated math and various heuristics to match the input against what they have seen before.

There isn't anything fundamentally new there; all that is happening is that the availability of huge datasets and computational resources makes it possible to build systems that were not feasible before. But there hasn't been any major breakthrough in understanding the scenes the algorithms "see", nor have the algorithms gained some fundamentally new ability they didn't have before.
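"Match the input against what they have seen before" is literally all the simplest family of classifiers does. A hypothetical 1-nearest-neighbour sketch (toy feature vectors and invented labels, nothing like a production system) makes the point concrete: the "decision" is nothing more than a distance lookup into the training set.

```python
import math

# Toy 1-nearest-neighbour classifier. Each "scene" is a made-up feature
# vector (motion level, people count, hour of day); labels are invented.
training = [
    ((0.9, 2, 23), "robbery"),
    ((0.2, 3, 14), "conversation"),
    ((0.1, 1, 9),  "browsing"),
]

def classify(scene):
    # The entire "intelligence": return the label of the closest stored example.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(training, key=lambda ex: dist(ex[0], scene))[1]

print(classify((0.8, 2, 22)))  # prints "robbery" - nearest stored example
```

There is no model of what a robbery *is*; a novel scene that resembles nothing in the training data still gets confidently assigned whichever stored label happens to be closest.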
 
The following users thanked this post: jpc

Offline SeanB

  • Super Contributor
  • ***
  • Posts: 15390
  • Country: za
Smokescreens are not used much here; there is growing use of pepper-spray systems instead, which are very effective. No burglary is possible if you cannot breathe and cannot see due to watering eyes. Plus it makes for good crime-watch footage, which is probably the best advertising the sellers have: places being broken into, the thieves getting no loot, and being arrested at the local ER with severe inhalation issues.

In most robberies the cheapest thing lost is the actual goods; the biggest cost is replacing broken glass, smashed doors and such, and then adding yet more razor wire, more electric fencing and stronger security bars. The only ones making out like bandits are the suppliers of razor wire (I buy local; they offer a full-service supply of everything you need, from the poles to the wire, and will even install it at a price), the glaziers, and of course the police with their "here is the OB number for your crime, do you want us to investigate it, please contact your insurance, I feel the need for some 'cooldrink' coming".
 

Offline Red Squirrel

  • Super Contributor
  • ***
  • Posts: 2459
  • Country: ca
If the system can reliably identify intruders then logically it should just activate the machine gun. (castle law)

No need for the cops

 ;)
I would never trust a computer to reliably identify intruders and kill them, but I'm happy for the computer to smoke them out of the building:


This has been installed in a few buildings in Australia.

That's a neat idea.

Replace with CO2 and that robber is never going to try robbing a bank again.  :P
 

Online vodka

  • Frequent Contributor
  • **
  • Posts: 525
  • Country: es
Smokescreens are not used much here; there is growing use of pepper-spray systems instead, which are very effective. No burglary is possible if you cannot breathe and cannot see due to watering eyes. Plus it makes for good crime-watch footage, which is probably the best advertising the sellers have: places being broken into, the thieves getting no loot, and being arrested at the local ER with severe inhalation issues.

In most robberies the cheapest thing lost is the actual goods; the biggest cost is replacing broken glass, smashed doors and such, and then adding yet more razor wire, more electric fencing and stronger security bars. The only ones making out like bandits are the suppliers of razor wire (I buy local; they offer a full-service supply of everything you need, from the poles to the wire, and will even install it at a price), the glaziers, and of course the police with their "here is the OB number for your crime, do you want us to investigate it, please contact your insurance, I feel the need for some 'cooldrink' coming".

If you believe that an electric fence and razor wire will stop the burglars, you are sorely mistaken. They could steal a car and crash it through the electric fence, or cut the fence with a portable grinder or shears.
Here, the copper thieves steal the catenary wires using a portable grinder strapped to a long wooden stick.
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 3496
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Copper thieves rarely steal railway overhead line. They don't dare touch high-voltage DC yet.
However, they do steal the control cables weekly. A lot of effort is being put into replacing those long copper cables with fiber.

But ripping all gas pipes from office buildings at night is nothing new.
 

Offline NANDBlog

  • Super Contributor
  • ***
  • Posts: 4645
  • Country: nl
  • Current job: ATEX certified product design
Copper thieves rarely steal railway overhead line. They don't dare touch high-voltage DC yet.
However, they do steal the control cables weekly. A lot of effort is being put into replacing those long copper cables with fiber.

But ripping all gas pipes from office buildings at night is nothing new.
You would think. Statistics from Hungary: there was a theft in Budapest in broad daylight a few years ago. They stole about 100 m of overhead cable on a tram line which transports half a million people a day and runs trams every 60 seconds or so. Statistics said 13.5 km of that wiring was stolen per year; the train company said they discovered stolen cabling 460 times a year. As I understand it, at some point they even tried stealing from the 750 kV wiring.
Absolutely ridiculous. They changed the law a few years ago so that smelters cannot accept wiring anymore unless you have papers for it. That reduced cable theft by some 75%, and it continues to decline. It is very easy to fight this.
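Taking those two figures together gives a rough average haul per discovered incident - a quick sanity check on the quoted numbers:

```python
# Sanity check on the Hungarian cable-theft figures quoted above.
stolen_per_year_m = 13_500   # 13.5 km of wiring stolen per year
incidents_per_year = 460     # times per year stolen cabling was discovered

avg_haul_m = stolen_per_year_m / incidents_per_year
print(round(avg_haul_m, 1))  # roughly 29 m of cable per incident
```

About 29 m per incident, which is consistent with opportunistic thieves cutting out whatever they can carry rather than the 100 m daylight job being typical.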
 

Offline X

  • Regular Contributor
  • *
  • Posts: 184
  • Country: 00
    • This is where you end up when you die...
That's a neat idea.

Replace with CO2 and that robber is never going to try robbing a bank again.  :P
Hopefully there won't be any false alarms. But why kill the robber immediately? Just use a highly irritating gas (e.g. fumes from cheap Chinese glue) to give him cancer as he is being smoked out.  >:D

Smoke screens are one of those things that are very effective but for some reason already cross most people's acceptance threshold... I had a friend who tried to distribute them, but regardless of all the demos he did, nearly everyone dismissed them. Too expensive, too "complicated", "meh, I don't care, there's insurance"...
The standard attitude towards security in general:
ACT I: THE FIRST MISTAKE
OWNER: "Our own abode of trade doth be well protected, I say."
ACT II: A CALL TO ELECTRIC ARMS
OWNER returns to his business, to find a cracked window and damaged/missing goods.
Enter SECURITY FIRM.

OWNER: "I say, a delinquent hath engaged in larceny. My dear security firm, what price must I pay, for cameras, robots, lasers, and a scarecrow that maketh mine enemies afeard?"
(SECURITY FIRM and OWNER walk around all over site)
SECURITY FIRM: "Alas, a hefty price, my liege. Seven hundred shekels of silver, and another fifty."
OWNER: "What?! Why such preposterous pricing?!"
SECURITY FIRM: "Your majesty, thou can get away with the cameras. Just a hundred shekels."
OWNER: "Nay! I hath better not. I shall instill fear into the hearts of delinquents by hanging manly artifacts on mine windows."
SECURITY FIRM: "Your Excellency hast been advised."
ACT III: HMMMPPH
OWNER: "Hmmmpph!"
SECURITY FIRM: "Hmmmpph!"
EVERYONE ELSE: "Hmmmpph!"
OWNER'S DOG: "Hmmmpph!"
EXEUNT

They're very effective and can be mandated by insurance companies, especially if you've had high-value prior claims, but I believe some leave a residue and are a PITA to clean up afterwards.
Never heard of insurance companies mandating this, but I think some provide bonuses such as lowered premiums for properties with specific security measures.

And, in fact, this is exactly what is being done. Crime prediction software that tells police where to send more patrols, based on past records of arrests and crime, is already in routine use.

https://www.predpol.com/

Now the issue with it is that it is only as good as its data - so if you have racist, corrupt cops overpolicing a black/poor/immigrant neighborhood and turning a blind eye to crime elsewhere, guess what happens when that data gets fed into the software - "Quarter XY has 75% more arrests than everywhere else, so we need to send much more police there!", perpetuating (or even worsening) the situation. This effect of "washing" institutional biases through statistics and "AI" software has also been thoroughly documented.
I see no issue with an AI being programmed to focus its monitoring efforts on areas which are statistically shown to require a higher police presence. Police do this to save resources anyway, so all they'll be doing is feeding it into the AI. In practice the AI doesn't do the policing, the police have to expend resources, so they want to achieve this in the most efficient way possible. And if more police are sent to "black/poor/immigrant neighbourhoods" because these areas are shown to have a significantly higher crime rate per head, then that's how it should be. The AI predicts crime based on a variety of parameters, and statistical and known history will naturally influence this prediction.

The robber will most likely run when seeing cops; most aren't suicidal, and the chances of winning a standoff with the police are pretty much zero. OTOH a terrorist or a nut with a gun is pretty much guaranteed to shoot - and yet the police respond, even though the chances of getting killed are much higher.
Some shoplifters will even do this.

Even if we had some sort of neural network or other magical algorithm that could perfectly identify when a robbery is going on as opposed to, say, people talking or having an argument, it still wouldn't change the fact that surveillance camera images generally suck badly - even humans often have trouble interpreting what is going on in them. And that isn't really changing, especially when it comes to night-time scenes.
You should look at footage from modern IP cameras. Even the no-name Chinese brands have good image quality compared to the "analog" equivalents, and the cost of IP-based CCTV installations has recently dropped to the point where they are accessible even to those on a tight budget.

AI already exists that can identify patterns of certain behaviours, but these are generally used with surveillance cameras on the order of 20 megapixels or above. The frame rate of such cameras is not the best (presently around 6 fps), but this will get better with time, and surveillance applications usually don't need the same frame rate as a Hollywood blockbuster anyway.

Humans also tend to err on the side of caution and will use common sense and experience when judging whether a situation warrants calling the cops. It is not perfect, but it acts as a huge filter for BS results. No machine learning algorithm has anything like that - all they do is use more or less sophisticated math and various heuristics to match the input against what they have seen before.

There isn't anything fundamentally new there; all that is happening is that the availability of huge datasets and computational resources makes it possible to build systems that were not feasible before. But there hasn't been any major breakthrough in understanding the scenes the algorithms "see", nor have the algorithms gained some fundamentally new ability they didn't have before.
Calling the police over suspected armed robberies may not even be the best problem to tackle with AI, which seems better suited to identifying potentially malicious series of actions occurring over a period of time. One issue, particularly in large organisations, is staff pilfering (staff stealing products or money, fraud committed by staff, etc.). In many CCTV installs there is often at least one camera viewing the till, and staff are denied administrative access to the CCTV system (though they are granted enough to back up footage) for this exact reason. I would be very interested to see a video-analytics AI algorithm developed to identify suspicious behaviour, since this kind of thing is far more subtle and difficult to notice than a robbery.

Smokescreens are not used much here; there is growing use of pepper-spray systems instead, which are very effective. No burglary is possible if you cannot breathe and cannot see due to watering eyes. Plus it makes for good crime-watch footage, which is probably the best advertising the sellers have: places being broken into, the thieves getting no loot, and being arrested at the local ER with severe inhalation issues.

In most robberies the cheapest thing lost is the actual goods; the biggest cost is replacing broken glass, smashed doors and such, and then adding yet more razor wire, more electric fencing and stronger security bars. The only ones making out like bandits are the suppliers of razor wire (I buy local; they offer a full-service supply of everything you need, from the poles to the wire, and will even install it at a price), the glaziers, and of course the police with their "here is the OB number for your crime, do you want us to investigate it, please contact your insurance, I feel the need for some 'cooldrink' coming".
It is important to understand that the primary purpose of a security defence is not to cause harm to miscreants, but to protect something against them. In the case of burglars, the aim is to disorient them and waste their time. Burglars don't like (and often cannot afford) to have their time wasted, because it increases the chance that they will be caught by either civilians or police. The more you waste the burglar's time, the better the outcome for you.
Of course, harming burglars in the process wastes more of their time by forcing them to tend to their injuries, and can waste even more when they are immobilised as a result; however, such harm should not be the primary motive behind a security system or defence. The primary objective of a security system must always be to protect something.

Unfortunately (at least in Australia) I can see a future where even remotely advanced security systems are outlawed. With idiotic, harmful, unrealistic and over-the-top anti-discrimination laws, health and safety regulations, gun controls, and limits on how people can defend themselves and their property, the insanely evil and the incurably stupid are well protected and compensated, while the real victims get screwed over badly. It is likely that even more laws will be introduced to protect stupidity, and security systems of the future will just file a record in an event log and do nothing else. Things aren't looking too good for the security industry here, let alone a future of cameras calling police and robbery-identifying AI.

We have most of the components to implement the tech, but good luck doing that without a lawsuit from an armed robber who suddenly turns up to court in a business suit and has a lifetime supply of crocodile tears. There are many situations where this sort of thing has happened, and owners have been annihilated in court as a result.
« Last Edit: May 10, 2017, 04:59:36 pm by X »
 

Offline CJay

  • Super Contributor
  • ***
  • Posts: 3547
  • Country: gb
give him cancer as he is being smoked out.  >:D

I hope neither you nor anybody else in your life ever has to deal with cancer. I've not been so fortunate, and you're a callous dickhead.
 
The following users thanked this post: SeanB

Offline janoc

  • Super Contributor
  • ***
  • Posts: 3108
  • Country: fr
I see no issue with an AI being programmed to focus its monitoring efforts on areas which are statistically shown to require a higher police presence. Police do this to save resources anyway, so all they'll be doing is feeding it into the AI. In practice the AI doesn't do the policing, the police have to expend resources, so they want to achieve this in the most efficient way possible. And if more police are sent to "black/poor/immigrant neighbourhoods" because these areas are shown to have a significantly higher crime rate per head, then that's how it should be. The AI predicts crime based on a variety of parameters, and statistical and known history will naturally influence this prediction.

Are the "black/poor/immigrant neighbourhoods" areas with significantly higher crime rates because those people really are more prone to crime, or because those areas are overpoliced, where you can get arrested even for trivial stuff that is ordinarily let slide elsewhere? Especially if you take into account the "broken windows" policing doctrine (= harshly punishing everything, no matter how tiny and insignificant, on the idea that it will discourage more serious crime), which has been widely discredited but is still widely applied.

Now take such data and feed it into a neural network or some other machine-learning system. The problem is that if you train that AI on a history of crime that is based on pre-existing biases (immigrant/poor/black/muslim/whatever...), you will only aggravate the situation. Basically machine-reinforced confirmation bias.

http://www.sciencemag.org/news/2017/04/even-artificial-intelligence-can-acquire-biases-against-race-and-gender
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?_r=0

It actually gets worse - there are similar software black boxes that will evaluate your likelihood to re-offend and suggest sentences to judges. Guess who gets hit with much tougher sentences for the same crimes.

There is a good book on why these black-box models, where we don't know how the system actually arrives at its result, are a huge problem:
https://weaponsofmathdestructionbook.com/


AI already exists that can identify patterns of certain behaviours, but these are generally used with surveillance cameras on the order of 20 megapixels or above. The frame rate of such cameras is not the best (presently around 6 fps), but this will get better with time, and surveillance applications usually don't need the same frame rate as a Hollywood blockbuster anyway.

These behavior-based approaches have been widely discredited as ineffective voodoo; unfortunately they are still being sold by technology vendors. At best such a system is good for alerting the operator that something is going on; beyond that it produces way too many false positives.

Calling the police over suspected armed robberies may not even be the best problem to tackle with AI, which seems better suited to identifying potentially malicious series of actions occurring over a period of time. One issue, particularly in large organisations, is staff pilfering (staff stealing products or money, fraud committed by staff, etc.). In many CCTV installs there is often at least one camera viewing the till, and staff are denied administrative access to the CCTV system (though they are granted enough to back up footage) for this exact reason. I would be very interested to see a video-analytics AI algorithm developed to identify suspicious behaviour, since this kind of thing is far more subtle and difficult to notice than a robbery.

Actually, most applications of CCTV are for ex-post investigations - the footage is only reviewed once something has happened.

And re behavior based stuff:

http://www.law.uchicago.edu/files/files/Harcourt%20OpEd%20Behavioral%20Profiling%20Longer%20Version.pdf
https://www.rt.com/usa/tsa-behavior-profiling-program-685/

And that is done by humans - it is very unlikely that some sort of AI would perform any better when even humans have difficulty predicting the problem better than random chance.

 

Offline yada

  • Frequent Contributor
  • **
  • !
  • Posts: 265
  • Country: ca
Like I said, the point is that the algorithm ...

Algorithm? What algorithm is going to recognize a robbery? I can understand a vision recognition system that could notice when people are moving in a room, but how does it know it's a robbery? You're talking about AI, the kind we just don't have yet.  :-//

All you really have then is a basic alarm system that notifies {somebody} when the room is occupied because it sees motion. That sort of thing has been around for years - it's called an alarm system.

You could make it recognize a gun, or an outstretched arm with a handgun in it, or say someone jumping over the counter. Even better, most employees wear uniforms, so it could look for non-uniforms behind the counter. Couple all this with machine learning and I would think it's doable. They have cameras on the highways in the US that record and read the license plates of *every* single car or truck that comes within so many miles of Washington DC. That's a little diagonal box moving at 100 km/h, with all kinds of different frames and levels of grime on them, all moving at different angles.
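The "basic alarm system that sees motion" mentioned above really is just frame differencing. A toy sketch, with frames as plain nested lists and thresholds invented for illustration (a real system would pull camera frames and use something like OpenCV):

```python
# Toy motion detector: compare two grayscale "frames" (nested lists of
# 0-255 pixel values) and flag motion if enough pixels changed.
# Both thresholds below are invented for illustration.
PIXEL_DELTA = 30       # per-pixel change needed to count as "changed"
MOTION_FRACTION = 0.1  # fraction of changed pixels that triggers the alarm

def motion_detected(prev, curr):
    changed = sum(
        abs(a - b) > PIXEL_DELTA
        for row_p, row_c in zip(prev, curr)
        for a, b in zip(row_p, row_c)
    )
    total = len(prev) * len(prev[0])
    return changed / total > MOTION_FRACTION

empty   = [[10] * 4 for _ in range(4)]                             # static scene
someone = [[10] * 4 for _ in range(2)] + [[200] * 4 for _ in range(2)]

print(motion_detected(empty, empty))    # False - nothing moved
print(motion_detected(empty, someone))  # True - half the pixels changed
```

This detects *that* something moved; deciding that the movement is a robbery rather than a clerk or a customer is the part that needs the human-level perception discussed in this thread.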
 

Offline xrunner

  • Super Contributor
  • ***
  • Posts: 4779
  • Country: us
  • hp>Agilent>Keysight>?
You could make it recognize a gun or an out stretched arm with a hand gun in it,

Maybe the person is buying a Slim Jim and handing it toward the clerk? That's gonna look a hell of a lot like the end of a pistol.

Quote
or say someone jumping over the counter.

Maybe that's how the clerk gets back behind the counter?

Quote
Even better most employees wear uniforms so it could look for non uniforms behind the counter.

Not universally worn though. Also what happens when the POS (point of sale) technician goes back there to work on the system and he doesn't have a uniform?

Quote
Couple all this with machine learning and I would think its doable.

No - as you can see, it's not reliably possible with today's technology IMHO. You need the perception capabilities of a human-level AI. As I said, I believe that will happen, but not anytime soon.
I am a Test Equipment Addict (TEA) - by virtue of this forum signature, I have now faced my addiction
 

Offline X

  • Regular Contributor
  • *
  • Posts: 184
  • Country: 00
    • This is where you end up when you die...
I hope you or anybody else in your life ever has to deal with cancer. I've not been so fortunate and you're a callous dickhead.
You are not special, and you will not gain sympathy from me with name-calling and personal attacks. If you are offended, deal with it yourself. I rarely go out of my way just to offend people, but I am not interested in sugar-coating and censoring myself just to cater to those who might be offended by something I say, especially if it isn't aimed at them.

Are the "black/poor/immigrant neighbourhoods" areas with significantly higher crime rates because those people really are more prone to crime, or because those areas are overpoliced, where you can get arrested even for trivial stuff that is ordinarily let slide elsewhere? Especially if you take into account the "broken windows" policing doctrine (= harshly punishing everything, no matter how tiny and insignificant, on the idea that it will discourage more serious crime), which has been widely discredited but is still widely applied.

Now take such data and feed it into a neural network or some other machine-learning system. The problem is that if you train that AI on a history of crime that is based on pre-existing biases (immigrant/poor/black/muslim/whatever...), you will only aggravate the situation. Basically machine-reinforced confirmation bias.

http://www.sciencemag.org/news/2017/04/even-artificial-intelligence-can-acquire-biases-against-race-and-gender
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html?_r=0

It actually gets worse - there are similar software black boxes that will evaluate your likelihood to re-offend and suggest sentences to judges. Guess who gets hit with much tougher sentences for the same crimes.
Knowing how police departments typically operate, and having seen the shoddy state of many immigrant/muslim/poor areas, it is very unlikely that any over-policing is responsible for the higher crime rate; it's more likely that these areas are simply more prone to criminal activity. It's not "racism" or "confirmation bias" to acknowledge and use facts and correlations like this, and despite what divisive "minority" groups like BLM want everyone to believe, it isn't all the state's fault that they are over-represented in crime statistics.

As for harsh punishments, what would you consider to be a minor/trivial crime (other than the obvious ones like smoking pot and saying things that offend others)? An incident can erupt from what was meant to be a response to a minor misdemeanor but escalated to the point where the offender now has to be restrained with a large amount of force, or they already have a very bad rap. I see nothing wrong with imposing a harsher sentence on someone who is more likely to re-offend, so long as this is done fairly and without unjustified prejudice.

There is a good book on why these black-box models, where we don't know how the system actually arrives at its result, are a huge problem:
https://weaponsofmathdestructionbook.com/
And therefore, any such AI/neural-network learning system should not be used in legal proceedings against anyone without the output being scrutinised by appropriate human personnel. If the output is used without enough scrutiny, there is a serious issue with how that jurisdiction operates. I do share your concern to some extent, as I can see the potential for dependence on AI in legal matters to escalate beyond control.

These behavior-based approaches have been widely discredited as ineffective voodoo; unfortunately they are still being sold by technology vendors. At best such a system is good for alerting the operator that something is going on; beyond that it produces way too many false positives.
Sure, I'm not saying they are effective or useful, only that they exist. In reality, high-res cameras are mostly useful for identifying people in a crowd rather than identifying behaviour. But as you say, many security tech companies love touting their stuff and pretending it's useful. It's a major issue in the industry: "oh, everyone else are just dinosaurs, you need OUR tech because it's NEW!" This confuses people (including legislators) and leaves many (including those in office) with something of an aversion to security systems. It's a dream for some clients, who get their security advice from Hollywood crime blockbusters and top-secret-agent policing TV shows.

Actually, most applications of CCTV are for ex-post investigations - the footage is only reviewed once something has happened.
Yes, video analytics are still used in this case, to search for the object or evidence one may be looking for - even simple analytics such as object searches based on visible movement.

And re behavior based stuff:

http://www.law.uchicago.edu/files/files/Harcourt%20OpEd%20Behavioral%20Profiling%20Longer%20Version.pdf
https://www.rt.com/usa/tsa-behavior-profiling-program-685/

And that is done by humans - it is very unlikely that some sort of AI would perform any better when even humans have difficulty predicting the problem better than random chance.
Of course you still need a human; all the video-analytics AI does is make the job easier. There will be false alarms that humans have to filter. I am not suggesting the AI should be the be-all and end-all of video analytics, but it can reduce the time an investigation takes.

In most applications of CCTV, such advanced behaviour-identifying analytics aren't even necessary, as the system is composed of many "low"-resolution cameras in specific positions, and the rough time and location of the occurrence are often already known. The only thing video analytics does here is help the operator find what he is looking for without spending a lot of time sifting through footage manually.

It is concerning how advanced this can get, knowing that dependence on technology will only increase, and that failures even in supposedly non-essential components will result in significant losses.
« Last Edit: May 11, 2017, 03:37:01 am by X »
 

