Author Topic: eevBLAB 108 - Will AI Quality Eventually Destroy Itself?  (Read 1120 times)


Offline EEVblog (Topic starter)

  • Administrator
  • *****
  • Posts: 37742
  • Country: au
    • EEVblog
eevBLAB 108 - Will AI Quality Eventually Destroy Itself?
« on: March 08, 2023, 05:54:42 am »
Will the human-perceived quality of the AI ChatGPT output eventually reach an inflection point where the quality decreases because there is now too much AI-produced content to learn from?
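To make the feedback loop in the question concrete, here is a deliberately toy numerical sketch (nothing to do with how ChatGPT is actually trained): a "model" that is just a Gaussian fit, repeatedly re-trained on samples it generated itself. Over many generations the fitted spread tends to drift and shrink, which is one simple way a model fed its own output can degrade.

# Toy feedback-loop illustration only; a Gaussian fit stands in for a language
# model, and its own samples stand in for AI-produced content on the web.
import numpy as np

rng = np.random.default_rng(0)

human_data = rng.normal(loc=0.0, scale=1.0, size=50)    # stand-in "human-written" corpus
mu, sigma = human_data.mean(), human_data.std()          # generation-0 model

for generation in range(1, 101):
    synthetic = rng.normal(mu, sigma, size=50)           # content "published" by the current model
    mu, sigma = synthetic.mean(), synthetic.std()        # next model trained only on that content
    if generation % 10 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}  spread={sigma:.3f}")

Typically the spread collapses towards zero instead of staying at the original 1.0: the self-trained "model" gradually loses the diversity of the original data.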

 

Offline SL4P

  • Super Contributor
  • ***
  • Posts: 2318
  • Country: au
  • There's more value if you figure it out yourself!
Re: eevBLAB 108 - Will AI Quality Eventually Destroy Itself?
« Reply #1 on: March 08, 2023, 08:34:10 am »
As long as AI (in its presented form of ‘Guided AI’) is constrained by commercial or practical hardware resources, then yes, it will never reach ‘sentience’, or even independence.

This might be a good thing, but in reality, I suspect true unconstrained AI will make better decisions and propositions than humans will in the same timeframe.

Humanity, watch out: you’d better raise your game.
Don't ask a question if you aren't willing to listen to the answer.
 

Offline AndyC_772

  • Super Contributor
  • ***
  • Posts: 4228
  • Country: gb
  • Professional design engineer
    • Cawte Engineering | Reliable Electronics
Re: eevBLAB 108 - Will AI Quality Eventually Destroy Itself?
« Reply #2 on: March 08, 2023, 10:14:34 am »
The quality of its output may fall, because it's learned from its peers, who have published their own output on public sites without any kind of independent review to check for quality, accuracy, or freedom from unjustified bias...?

This differs from the average human how exactly?

Online RoGeorge

  • Super Contributor
  • ***
  • Posts: 6207
  • Country: ro
Re: eevBLAB 108 - Will AI Quality Eventually Destroy Itself?
« Reply #3 on: March 08, 2023, 10:45:15 am »
I think AI content will need to be marked as such, much like the "automated email, do not reply" or "this is a recording" kind of marking.

Otherwise it will turn into a pest, swamping human-generated content very fast, just as spam emails flooded our inboxes and marketing blurb flooded the technical specs.
« Last Edit: March 08, 2023, 10:48:51 am by RoGeorge »
 

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1210
  • Country: pl
Re: eevBLAB 108 - Will AI Quality Eventually Destroy Itself?
« Reply #4 on: March 08, 2023, 10:22:38 pm »
A learning data set is not garbage slurped indiscriminately from the web. The size does not permit manual filtering, but some sources and entire classes of inputs are rejected. The output is also evaluated, and fine adjustments are made to obtain the desired results. That has been the case with ChatGPT, as it has been for other models. Eliminating generated content is even simpler than ensuring the quality of human-produced data, because it can be delegated to classifier models.(1) This is not a trivial task, but dealing with “free-range, 100% natural text” is even worse.
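As a rough sketch of that classifier idea (purely illustrative: the tiny example corpus, the TF-IDF plus logistic-regression detector, and the 0.5 threshold are all invented here, not anything the model vendors have described):

# Illustrative only: a toy "is this machine-generated?" detector used to filter
# a candidate training corpus. Real filtering would use far larger data and
# stronger models; the texts and threshold below are made up for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = machine-generated, 0 = human-written.
texts  = ["as an ai language model i cannot",
          "solder the ground pin first, then check continuity",
          "in conclusion, it is important to note the following points",
          "my scope died after the mains spike, any repair tips?"]
labels = [1, 0, 1, 0]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

candidate_corpus = ["check the gate resistor value before blaming the FET",
                    "in conclusion, it is important to note that resistors resist"]
p_generated = detector.predict_proba(candidate_corpus)[:, 1]
training_set = [doc for doc, p in zip(candidate_corpus, p_generated) if p < 0.5]
print(training_set)   # keep only documents the detector considers likely human-written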

I also have trouble with the term “quality loss”. I do not like nebulous concepts when they are the very foundation of the discussion. What does it really mean? Gibberish text? Invalid information? Poor style? Useless talk? Inability to control outputs? Each of these is different. I feel like an ignostic asked about a god.


(1) Those could be vulnerable to an intentional, large-scale attack. But I assume this is not the scenario you are talking about. The scenario itself is also a muddy topic.
People imagine AI as T1000. What we got so far is glorified T9.
 

Online Bud

  • Super Contributor
  • ***
  • Posts: 6913
  • Country: ca
Re: eevBLAB 108 - Will AI Quality Eventually Destroy Itself?
« Reply #5 on: March 09, 2023, 01:13:49 am »
I predict a different sort of problem: as humans rely on AI more and more, they will become dumber and their ability to learn will drop. An equilibrium may be reached, causing stagnation, which may then cause who knows what problems.
Facebook-free life and Rigol-free shack.
 

Online Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6265
  • Country: fi
    • My home page and email address
Re: eevBLAB 108 - Will AI Quality Eventually Destroy Itself?
« Reply #6 on: March 09, 2023, 01:39:13 am »
One data point:

I am a voracious sci-fi reader.  Well, used to be.  I also used to listen to Sci-Fi audiobooks on YT (and still do, for LibriVox ones; read by Phil Chenevert for example).

In the last year or so, there has been a sudden onslaught of, well, crappy Sci-Fi audio on YouTube. Some authors are even proud that they use ChatGPT to "flesh out their ideas". (Note: I found that out only after I had found their stories too predictable and less than enjoyable.)

This means the amount of content will increase, but the fraction of quality content will decrease.

I expect the same to be true for every field where ChatGPT and similar tools are used, because they create nothing; they generate "new" content by extrapolating from already existing content.

If someone loves serials describing the exploits of beloved, familiar personae, be that Sci-Fi or Soap Operas, they will love it.

For those of us who crave new stories, takes and insight, it just means more shit to wade through in an effort to find the rare gold nugget.

(Even moreso now that Sci-Fi awards have become Social Justice awards, given to those who most prominently support LGBTQI++ heroes and their struggles against oppressive patriarchies.)



I see no difference between "AI" (ChatGPT and other language-based models) and having a free slave army of near-idiots copying stuff off the web.
 

Offline golden_labels

  • Super Contributor
  • ***
  • Posts: 1210
  • Country: pl
Re: eevBLAB 108 - Will AI Quality Eventually Destroy Itself?
« Reply #7 on: March 09, 2023, 03:13:21 am »
I predict a different sort of problem: as humans rely on AI more and more, they will become dumber and their ability to learn will drop. An equilibrium may be reached, causing stagnation, which may then cause who knows what problems.
Is there anything to drop? /s
People imagine AI as T1000. What we got so far is glorified T9.
 

