Author Topic: Ex-cops are selling AI that decides you're lying when you call 911/999 for help  (Read 314 times)


Offline quince (Topic starter)

  • Newbie
  • Posts: 8
  • Country: de
Saw this ProPublica story on Reddit. Are you ready for your industry to get invaded by AI-wielding cranks, convinced their dataset is flawless and their brain is a perfect lie detector? OpenAI et cetera have made it way too easy to do this.

Cranks, ex-cops, have been raking in the cash selling trainings that teach current cops they can tell whether an emergency number caller is lying about the crime they're reporting. They don't publish their data, they don't talk to reporters, they don't like to be talked about.

Now, these degenerate cranks want to get even richer renting out artificial intelligence models to do the same job for them at a massive scale. First text, then audio, then video, then anything.

Quote
Tracy Harpster, a deputy police chief from suburban Dayton, Ohio, was hunting for praise. He had a business to promote: a miracle method to determine when 911 callers are actually guilty of the crimes they are reporting. “I know what a guilty father, mother or boyfriend sounds like,” he once said.

Harpster tells police and prosecutors around the country that they can do the same. Such linguistic detection is possible, he claims, if you know how to analyze callers’ speech patterns — their tone of voice, their pauses, their word choice, even their grammar. Stripped of its context, a misplaced word as innocuous as “hi” or “please” or “somebody” can reveal a murderer on the phone.

So far, researchers who have tried to corroborate Harpster’s claims have failed. The experts most familiar with his work warn that it shouldn’t be used to lock people up.

Prosecutors know it’s junk science too. But that hasn’t stopped some from promoting his methods and even deploying 911 call analysis in court to win convictions.

"They Called 911 for Help. Police and Prosecutors Used a New Junk Science to Decide They Were Liars."  https://www.propublica.org/article/911-call-analysis-fbi-police-courts

"St. Pete startup uses AI to detect lies" https://stpetecatalyst.com/st-pete-startup-uses-ai-to-detect-lies/

One of the cranks' websites, bragging about the innocent people they've thrown in jail: https://www.statementanalysis.com/bio/

And their startup, complete with AI tld: https://www.deceptio.ai/

What's next? Feed the AI your social media profile and get an undesirable ranking?
« Last Edit: March 25, 2024, 06:13:44 pm by quince »
 

Offline mendip_discovery

  • Frequent Contributor
  • **
  • Posts: 851
  • Country: gb
Sorry, you lost me the moment you called ex-police cranks.

If it wasn't ex-police would you still be as angry?

Using AI to detect false claims could be helpful, but the calls still need a response; it just means you can inform the people heading to that call that there may be some deception behind it.

Prank calls take up valuable time and can cause big issues at times.

Insurance companies here in the UK are already using firms that get paid to find out whether you are making false claims. They use tools to measure the stress levels in your voice, etc. So it's nothing new. It's just got the latest big-business bingo word: the cloud has gone away and now we are on to AI.
Motorcyclist, Nerd, and I work in a Calibration Lab :-)
--
So everyone is clear, Calibration = Taking Measurement against a known source, Verification = Checking Calibration against Specification, Adjustment = Adjusting the unit to be within specifications.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14481
  • Country: fr
I don't think they need that shit to reject calls from people genuinely needing help. :popcorn:
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21688
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Sorry, you lost me the moment you called ex-police cranks.

If it wasn't ex-police would you still be as angry?

I'm not sure if you realize how poor the police are over here... note the article is in the US.

Not that it's necessarily representative of the "tool" / data set used here specifically, but past experiments with AI decision tools have shown rampant racism and other biases; it turns out that training them on human responses suffering from the same biases simply propagates those biases onward.
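
A minimal, made-up sketch of what that propagation looks like (assuming scikit-learn; the groups, numbers and labels are all invented for illustration, nothing to do with the actual product or any real 911 data): if the historical labels a model is trained on already flag one group more often, the model reproduces that skew even when the evidence is identical.

Code: [Select]
# Toy illustration only: biased training labels -> biased model output.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

group = rng.integers(0, 2, n)           # hypothetical protected attribute (0 or 1)
signal = rng.normal(size=n)             # the genuinely relevant evidence
truly_deceptive = (signal > 1.5).astype(int)

# Biased historical labels: reviewers flagged group 1 an extra ~20% of the time,
# regardless of the actual evidence.
biased_label = truly_deceptive | ((group == 1) & (rng.random(n) < 0.2)).astype(int)

model = LogisticRegression().fit(np.column_stack([group, signal]), biased_label)

# Score 1000 calls per group with *identical* evidence (signal = 0):
# the model learned to rate group 1 as more "deceptive" anyway.
for g in (0, 1):
    X = np.column_stack([np.full(1000, g), np.zeros(1000)])
    p = model.predict_proba(X)[:, 1].mean()
    print(f"group {g}: mean predicted 'deception' score {p:.2f}")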

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: quince

