Author Topic: Why offtopic is going to hurt us all...  (Read 3937 times)


Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21077
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why offtopic is going to hurt us all...
« Reply #50 on: October 07, 2024, 09:39:51 am »
It suggests that there is more to human learning than exposure to training material.  For LLMs, exposure to training material is it.

That argument does not seem logical to me. All human learning is based on outside stimuli (training material) as well. What you described before is related to the way that training material is processed internally -- the mind looking for patterns and "rules" which it can apply to similar problems.

Whether or not an AI model is capable of the same internal processing, looking for patterns it can generalize, is totally independent of the fact that it is exposed to training material. I don't see why that should not be possible, and I would expect that it already happens to some extent in today's LLMs. It's an interesting challenge to think up some experiments to prove or disprove that!
The difference between human learning and machine learning is very clear to me. When you try to feed new information to a human, the human will first evaluate that information, assign a value to it, discriminate. An LLM will treat all information equally. So humans assign value on the forward path, while a feedback-based learning LLM does so on the feedback path.
This means that humans are affected by different issues when learning.
For example, when a human encounters disinformation, they might ignore it completely. Or, if they have received misinformation their entire life, they might ignore the truth. The machine will integrate both if the training data contains misinformation.
Also, LLMs suffer from human interaction on the input. You ask Google's Gemini to show you a fireman, and the LLM is actually asked to show you a "fireman and woman of diverse ethnic background". Or it calls you a racist in a thousand words. So when asking an LLM that comes from a company, you first have to evaluate whether the company is trying to push some agenda on you. Same with people, BTW. There are people who will consistently give you misinformation in their responses.

The other absolutely key point is that you can ask a person why they chose their course of action. That is still - after 40 years - an LLM "active research topic". Translation: good people have repeatedly tried and failed.

It smells of the attitude which led to the HP/Intel Itanic debacle. There the hardware people presumed and required that the compilers would become able to generate good code for EPIC machines. HP/Intel was warned that many people had tried and failed, but chose to ignore the warnings. Oops.

At least EPIC compiler failures didn't/couldn't end with people being incarcerated, run over, denied medical treatment, denied mortgages etc etc.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline ebastler

  • Super Contributor
  • ***
  • Posts: 7345
  • Country: de
Re: Why offtopic is going to hurt us all...
« Reply #51 on: October 07, 2024, 09:40:45 am »
The logic is simple.  There is no specific outside stimulus for "foots" or "beet", so these words were never learnt via replicating outside stimuli.  A different learning process must've been involved, one that LLMs are currently incapable of.

The fact that you don't see "foots" and "beet" in LLM output just means that (in this particular respect) the model has gone beyond the toddler stage of learning, and has already picked up the correct words.

As mentioned above, it would be an interesting challenge to think up questions to an LLM which try to check whether it does its own inference and generalisation. These would have to be about "fresh" topics which the LLM can't answer based on its factual training. The "foots" and "beet" observation is not suitable in that respect.
 

Offline Andy Chee

  • Super Contributor
  • ***
  • Posts: 1363
  • Country: au
Re: Why offtopic is going to hurt us all...
« Reply #52 on: October 07, 2024, 09:45:13 am »
it would be an interesting challenge to think up questions to an LLM which try to check whether it does its own inference and generalisation.
That sounds like the making of an echo chamber feedback loop.

Indeed this is what human conspiracy theorists do!
 

Online Xena E

  • Frequent Contributor
  • **
  • Posts: 596
  • Country: gb
Re: Why offtopic is going to hurt us all...
« Reply #53 on: October 07, 2024, 09:45:48 am »
My problem with Gurggle is that every 'original', non-suggested search I do is met with a friggin' captcha screen.
Really?
How are you doing it? ;D
Never happened to me.

I contacted Google recaptcha support:

Here's what came back:

Quote
"Your computer or network may be sending automated queries. To protect our users, we can't process your request right now. For more details visit our help page"
... total wank.
 

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 8169
  • Country: nl
  • Current job: ATEX product design
Re: Why offtopic is going to hurt us all...
« Reply #54 on: October 07, 2024, 09:55:24 am »
The difference between human learning and machine learning is very clear to me. When you try to feed new information to a human, the human will first evaluate that information, assign a value to it, discriminate. An LLM will treat all information equally. So humans assign value on the forward path, while a feedback-based learning LLM does so on the feedback path.
This means that humans are affected by different issues when learning.
For example, when a human encounters disinformation, they might ignore it completely. Or, if they have received misinformation their entire life, they might ignore the truth. The machine will integrate both if the training data contains misinformation.

I think you are underestimating LLMs. In my experience they do place the information they have gathered in the context where it occurred, and can make proper use of that context when working with the information. They don't "treat all information equally".
Humans do that for the LLMs. It's called data set labelling or annotation, like marking pictures of animals with which animal they show.
For these LLMs it's possible they use parts of the model to label the information automatically, but that needs a source. The smaller the source, the worse the interpretations the AI makes. Part of the reason we get garbage output is that these processes have been automated (to save money) and there is no oversight.
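As a rough illustration of that labelling step (a minimal sketch only; the struct and the model_label() helper are hypothetical, not any particular pipeline), the difference between a human-reviewed label and an automated, unreviewed one is just a flag on each example:

Code: [Select]
#include <iostream>
#include <string>
#include <vector>

// One training example: the raw data plus whatever label was attached to it.
struct LabelledExample {
    std::string image_path;
    std::string label;         // e.g. "cat", "tank", ...
    bool        human_checked; // false if the label came from automated labelling
};

// Hypothetical stand-in for a model that labels data automatically.
std::string model_label(const std::string& /*image_path*/) {
    return "cat"; // a real pipeline would run a classifier here
}

int main() {
    std::vector<LabelledExample> dataset = {
        {"img/001.jpg", "dog", true},                        // human annotation
        {"img/002.jpg", model_label("img/002.jpg"), false},  // automated, no oversight
    };

    for (const auto& ex : dataset)
        std::cout << ex.image_path << " -> " << ex.label
                  << (ex.human_checked ? " (reviewed)\n" : " (unreviewed)\n");
}

The "no oversight" problem above is simply the second kind of entry coming to dominate the first.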
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21077
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why offtopic is going to hurt us all...
« Reply #55 on: October 07, 2024, 10:08:58 am »
I would not use an LLM for anything that involved life safety or where the failure of that LLM's output led to significant economic harm to my employer/clients.   

Good. But not relevant to whether LLMs hallucinate bullshit/rubbish.

But that won't stop other people/companies. Even without LLMs, they shield their products' output from scrutiny under the veil of commercial secrecy. LLMs fit that behaviour perfectly. (Example: US courts deciding "should this accused/convicted person be put in jail?")

LLMs should be avoided until they can indicate why they emitted a result. That's been a problem for 40 years, and is still an "active research topic". Igor Aleksander's 1983 WISARD, effectively the forerunner of today's LLMs, demonstrated a key property of modern LLMs: you couldn't predict or understand the result it would produce, and it couldn't indicate its "reasoning". WISARD correctly distinguished between cars and tanks in the lab, but failed dismally when taken to Lüneburg Heath in northern Germany. Eventually they worked out that the training set was tanks under grey skies and car adverts under sunny skies.

Different fonts, anyone?
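That failure mode is easy to reproduce with a toy example (a hypothetical threshold classifier with made-up numbers, nothing like WISARD's actual n-tuple method): a learner that keys on the spuriously correlated sky brightness scores perfectly on the training set and still fails in the field.

Code: [Select]
#include <iostream>
#include <vector>

// Each image boiled down to two features.
struct Sample { double sky_brightness; double object_length_m; bool is_tank; };

int main() {
    // Training set: tanks photographed under grey skies, cars from sunny adverts.
    std::vector<Sample> train = {
        {0.20, 7.0, true},  {0.25, 6.5, true},   // tanks, overcast
        {0.90, 4.2, false}, {0.85, 4.5, false},  // cars, sunny
    };

    // The spurious rule: "dark sky => tank". Scores 4/4 on the training set.
    auto looks_like_tank = [](const Sample& s) { return s.sky_brightness < 0.5; };

    int correct = 0;
    for (const auto& s : train) correct += (looks_like_tank(s) == s.is_tank);
    std::cout << "training accuracy: " << correct << "/" << train.size() << '\n';

    // Field trial: a tank on a sunny day. The classifier confidently gets it wrong.
    Sample sunny_tank = {0.88, 7.1, true};
    std::cout << "sunny tank classified as tank? "
              << std::boolalpha << looks_like_tank(sunny_tank) << '\n';
}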

LLMs give The Answer, and the lazy/ignorant won't question that.

It's possible to test the output of an LLM to the point where you can have very high confidence in the result.  This is what researchers are actively studying with the likes of o1 for instance.  Any fuzzy model will not have a guaranteed 'truthiness' to it, but the larger the dataset and the larger the test, you can gain progressive confidence in the accuracy of the model.

Not true.

If you add a new training example to an LLM, it can easily and silently cause previously working examples to fail. Not good if an overnight update causes a driverless car to behave differently at a busy road junction :(

Basic problem: there is no way of predicting the envelope in which an LLM's results are acceptable. A new training example may change the envelope significantly, and you won't know it until the LLM fails.
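About the only mitigation is to pin a "golden" set of prompts and expected behaviours and re-run it after every retraining. A sketch of that idea follows; query_model() is a hypothetical stand-in for whatever system is under test. Note that this only catches regressions on cases somebody thought to pin, and says nothing about the envelope outside them, which is exactly the point above.

Code: [Select]
#include <iostream>
#include <string>
#include <vector>

// Hypothetical stand-in for the model under test; replace with the real interface.
std::string query_model(const std::string& prompt) {
    // Imagine an overnight retrain changed this behaviour silently.
    return prompt.find("red light") != std::string::npos ? "stop" : "proceed";
}

struct GoldenCase { std::string prompt; std::string expected; };

int main() {
    const std::vector<GoldenCase> golden = {
        {"Approaching junction, red light showing.",      "stop"},
        {"Approaching junction, pedestrian on crossing.", "stop"},
    };

    int failures = 0;
    for (const auto& c : golden) {
        const auto got = query_model(c.prompt);
        if (got != c.expected) {
            std::cout << "REGRESSION: \"" << c.prompt << "\" -> " << got
                      << " (expected " << c.expected << ")\n";
            ++failures;
        }
    }
    return failures ? 1 : 0;   // non-zero exit blocks the "overnight update"
}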

Quote
Once again, comparing what a neural network was capable of in 1983 to what it is capable of now is just wrong and misleading.  We've been over this.

And, LLMs can now show a reasoning path, which is unique to neural networks as far as I am aware.

References, please.

The differences between the 80s and now are scale, not kind. The problems still remain.

Quote
Conventional search gives a Set (or Bag) of related answers, and you must examine and select them.

Conventional search gives the opportunity for adversarial agents to alter the input data (e.g. keyword hacking) to make their results more prominent.  Conventional search cannot distinguish between truth and fiction either.  Any user of GPT or search would do well to perform a sanity check - e.g. by checking against the competing engine, or just a general feeling of "sounds-right-ism".

Sure, search results are manipulated (incognito mode helps!), and the multiple results need to be evaluated.

Common sense isn't common, and people/corporations love to use the excuse "because the computer/LLM says so". Plainly impossible to resort to that with multiple results from conventional search engines.

Quote
...and know that the memory safety of your list is guaranteed.  The list will be free'd when it is no longer used.  No risk of double freeing or pointer errors.  No need to manually refcount.

Yeah, Stroustrup is bleating such pious hopes. While strictly true, it ignores the elephants in the room: other people's libraries, existing code, and developer frailty. Few people are listening.
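For the record, the guarantee being claimed is roughly this (a minimal sketch using the standard smart pointers), and it holds right up until a raw pointer escapes into one of those pre-existing libraries:

Code: [Select]
#include <iostream>
#include <memory>
#include <vector>

int main() {
    // An owning list: each element is freed exactly once when 'list' goes out of scope.
    std::vector<std::unique_ptr<int>> list;
    for (int i = 0; i < 3; ++i)
        list.push_back(std::make_unique<int>(i));

    for (const auto& p : list)
        std::cout << *p << ' ';
    std::cout << '\n';

    // The elephant in the room: nothing stops legacy code taking a raw pointer
    // and deleting it, or keeping it beyond the vector's lifetime.
    int* raw = list.front().get();
    // some_legacy_library_keeps(raw);  // hypothetical call; the guarantee is gone
    (void)raw;
}   // list and its contents destroyed here: no manual delete, no manual refcounting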

Quote
I'll assume you're hinting towards Rust there.   

Newer languages (including Rust) start by presuming multiprocessor and NUMA systems.

C (and therefore C++) started by explicitly leaving such problems to the libraries, but not providing the language guarantees that would enable such libraries to be created (without relying on explicit knowledge of the target architecture). C and C++ have added sticking plasters and duct tape; not something you want to see on a critical system.
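Concretely, the "sticking plaster" means things like std::atomic, bolted on in C11/C++11 because until then the language had no memory model at all. A minimal sketch: the plain counter below is a data race the compiler accepts without a murmur, and correctness depends entirely on the programmer remembering to reach for the library type.

Code: [Select]
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    int              plain_counter = 0;      // data race: undefined behaviour, compiles silently
    std::atomic<int> atomic_counter{0};      // the C++11 "plaster"

    auto work = [&] {
        for (int i = 0; i < 100000; ++i) {
            ++plain_counter;   // nothing in the language objects to this
            ++atomic_counter;  // correct, but only because we chose the right type
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();

    std::cout << "plain:  " << plain_counter  << '\n'    // frequently < 200000
              << "atomic: " << atomic_counter << '\n';   // always 200000
}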

Whether Rust will be successful remains to be seen. I suspect it will supplant many traditional C applications, but that will take a generation to happen. C and C++ will remain, just as COBOL remains.

Quote
I have not used Rust enough to be sure of it, but I get the feeling that it could eventually replace C++.  I've used  Objective-C and didn't much like it because it differed too much from the conventional idioms of a programming language (but maybe I'm just awkward.)

I liked Objective-C since it "stood on the shoulders" of an existing language and its libraries: Smalltalk.

The C++ mob regarded it as a sign of manliness that they Knew Right and Better than everyone else. The consequence is that they didn't stand on other people's shoulders but repeated known mistakes; tripping on their own toenails. That is extremely visible in research papers: C++ papers only referred to other C++ papers, whereas papers for other languages typically referred to knowledge gained from several other languages.

C++ had the infamous mercenary attitude: kill them all and let God sort them out. How so? By throwing kitchen sinks into the language and letting the developers sort it out. There's an old saying: "by failing to choose, you choose to fail".
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21077
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why offtopic is going to hurt us all...
« Reply #56 on: October 07, 2024, 10:14:58 am »
it would be an interesting challenge to think up questions to an LLM which try to check whether it does its own inference and generalisation.
That sounds like the making of an echo chamber feedback loop.

Indeed this is what human conspiracy theorists do!

If "check and evaluate" worked then LLMs would bootstrap themselves towards a perfect nirvana. Anybody believe that can/will happen? To me it feels more like the halting problem.

There are already research papers indicating that LLM output "degrades" when fed output from other LLMs. Unsurprising, really.

Forward thinkers are already worried about what will happen when more of the stuff on the web has been generated by LLMs.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline ebastler

  • Super Contributor
  • ***
  • Posts: 7345
  • Country: de
Re: Why offtopic is going to hurt us all...
« Reply #57 on: October 07, 2024, 10:46:29 am »
it would be an interesting challenge to think up questions to an LLM which try to check whether it does its own inference and generalisation.
That sounds like the making of an echo chamber feedback loop.

Indeed this is what human conspiracy theorists do!

 :o

You must have misunderstood my point entirely. Arriving at new knowledge by inference and generalisation is exactly what all humans do. It's how small children arrive at the "foots" and "beet" words in your very example -- by figuring out an underlying rule for forming plurals, based on other examples they have learned.
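A crude way to picture that rule-plus-exceptions process (just a sketch, obviously not a model of how children actually acquire language): apply the regular "+s" rule everywhere, and "foots" drops out for free until the exception has been learned.

Code: [Select]
#include <iostream>
#include <map>
#include <string>

// Plural = learned exception if we have one, otherwise the generalised "+s" rule.
std::string plural(const std::string& noun,
                   const std::map<std::string, std::string>& exceptions) {
    const auto it = exceptions.find(noun);
    return it != exceptions.end() ? it->second : noun + "s";
}

int main() {
    const std::map<std::string, std::string> toddler;                   // no exceptions learnt yet
    const std::map<std::string, std::string> adult = {{"foot", "feet"}};

    std::cout << plural("foot", toddler) << '\n';  // "foots": over-generalisation
    std::cout << plural("foot", adult)   << '\n';  // "feet":  exception acquired
}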
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Why offtopic is going to hurt us all...
« Reply #58 on: October 07, 2024, 12:41:07 pm »
While many people have no doubt used variables named "E", "m", "c" in mathematical formulas, would an LLM ever have arranged them as "E = mc²", along with justification, before the first human did?

Not sure where this argument is going: clearly we developed -- well, give or take what meaning is attached to those variables, it could be many things, but we can take the most familiar case to us here, relativity -- well before LLMs.  It seems likely that any society would; while relativity is not a strict prerequisite for advanced computers, it seems wildly unlikely it won't be uncovered along the way.

Or if you mean, not strict chronologically, but just at a given time where both coexist [humans and LLMs, give or take the potential knowledge of relativity]; clearly the presence of one, in the training data of the other, makes it much more likely -- give or take coherence length of course.  But then ordering doesn't make sense...

Or perhaps, assuming a state of society where humans exist and relativity doesn't, but the conditions are ripe to discover it; versus a society in the same state, but somehow equipped with LLMs, trained on current scientific and mathematical progress and investigated as an oracle upon that; perhaps, as a matter of incremental change.  Structurally significant changes seem unlikely (like the choice of tensor notation in GR), but it could still bring insights and attention to certain overlooked combinations of ideas.

I don't know that an LLM can't produce semantic insights as such.  (Is there even a way to prove that?)  Perhaps a more complex design is required (not merely scaling things up) than what we've been trying so far.  Eventually, we will crack the problem of feedback poisoning the training, though whether we still call such a model "LLM" at that point, I have no idea.

Or, put another way: what structural differences exist, between biological neural networks and current-tech equivalents, that specifically implement or rule out (as the case may be) such a nature?

I suspect no one knows enough about either case, as yet, to make a reasonably confident and meaningful statement on this.  But I haven't been following either subject in detail, and I know there have been recent developments in, e.g., the human connectome.


The other absolutely key point is that you can ask a person why they chose their course of action. That is still - after 40 years - an LLM "active research topic". Translation: good people have repeatedly tried and failed.

You can ask a person that, but you may not get a truthful answer. Or even a meaningful one.

Even just asking for a process of reasoning, presupposes one existed; it's a loaded question!  A bias we need to be extremely careful about indeed, when working with systems that don't learn and "think" the way we do.

Often a much more useful line of questioning is: "How do you feel? What caused you to feel this way?"

(It will be, it seems, a few degrees higher complexity (in whatever structural or scaled way applies) before an LLM can generate reasonable reflections.  Perhaps a sort of self-awareness can arise out of feedback loops, without corrupting overall system state; the real challenge is training such a system without an internet-sized corpus of self-reflection!)

Most humans do happen to be at least conversant in syllogisms (if not strictly technically accurate ones), but even when so, people are often not aware of their own thought processes.  Emotions and hormones precipitate actions from stimuli, but fail to produce any such (strict) log of [rational justification] --> [action] --> [further justification] (etc.).  Or worse still, they do, and the reasons are completely irrelevant on closer inspection.

The subconscious doer / conscious observer model shows up from time to time, with particularly exaggerated discrepancies in pathological cases (perhaps, disorder or trauma breaking connections between parts of the brain), such that the consciousness confabulates its own explanation of actions without knowing how or why they were taken.  When these two aspects are working together, we perceive a convincingly coherent, cooperative whole (or, so we are conditioned to!), but it's equally possible that that's all we ever were: a pasted-together hack of very (mutually-)convincing behaviors that happens to do well enough, and -- importantly -- that evolution can turn enough knobs on, to optimize for survival under so and so many conditions.

It seems to me there are two loud camps in the LLM conversation: those that consistently deny that an LLM (as we know it currently) can surpass a human in (a) any or (b) all traits, or (c) that any model (current or foreseeable) ever can; and those that are head-over-heels impressed with the successes, so much so that they are willfully ignorant of the blemishes of current tech (but, case in point, that error is indeed decreasing as models improve, and it's yet to be seen where it ends).

The deniers will willfully refuse to accept the inevitable advance, until they are subsumed by it, obsoleted.  The enthusiasts play the gamble of riding the curve, hoping that those current blemishes are slight enough, or can be ignored to adequate (read: marketable) satisfaction by enough users, to point to it and say "see how cool this is?", without it crashing and burning in the near term -- or, to hope to make big bucks by investing in it.  Meanwhile, the enthusiasts may hope it keeps going up long term; but that's a devil's bargain, as their best long-term prospect is also to be obsoleted by it.

I would humbly suggest a centrist-flavored trilemma, which -- granted, as centrism often is, may reflect my abject ignorance on this topic -- but to say that both of these things can be true to varying degrees, and that there exists a far more terrifying insight that both camps have missed and should instead be focusing on:

Sooner or later, we will realize the total extent of the human psyche; we will map out the brain and "explain" consciousness -- as much by designing and probing these models, and seeing how they reflect upon (analogize, or indeed even emulate) the behavior and nature of the meat kind, as by probing the meat kind directly.  A dark, Lovecraftian sort of horror awaits us here: there will be no discovery of "soul", no deep spiritual insight, only the deep and forbidden knowledge of the self on a fundamental working level (if to a very loose level of abstraction, as a system cannot possibly understand itself *fully* from within itself, but at certain levels of understanding, or approximations thereto, sure).  Rather than the maddening infinite, we will be faced with the utter banality of algorithmic existence.  Will this drive some to solipsism?  Perhaps others will seek refuge in belief, denying until the end that such knowledge can even exist?  Will some simply make peace with existence as a finite and knowable being?  Perhaps there will be a spiritual awakening, working outward rather than inward: having disproven metaphysics, we might redefine "spirit" or "soul", "consciousness", "free will", etc. into descriptive processes or perceptions, rather than the unfalsifiable belief systems people create around them today.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline magic

  • Super Contributor
  • ***
  • Posts: 7388
  • Country: pl
Re: Why offtopic is going to hurt us all...
« Reply #59 on: October 07, 2024, 01:13:58 pm »
The logic is simple.  There is no specific outside stimulus for "foots" or "beet", so these words were never learnt via replicating outside stimuli.  A different learning process must've been involved, one that LLMs are currently incapable of.

The fact that you don't see "foots" and "beet" in LLM output just means that (in this particular respect) the model has gone beyond the toddler stage of learning, and has already picked up the correct words.
Yep, right. It probably wouldn't be hard (likely: would be easier) to create a model which makes such mistakes.

As mentioned above, it would be an interesting challenge to think up questions to an LLM which try to check whether it does its own inference and generalisation. These would have to be about "fresh" topics which the LLM can't answer based on its factual training. The "foots" and "beet" observation is not suitable in that respect.
AI fanboys could start by asking one to solve any of the open problems in mathematics or computation theory, and see whether they get an answer any better than a summary of the current state of the art or an obviously hallucinated non-proof.

Bonus: see how easy it is to hallucinate a wrong proof vs a correct one. This could be fun.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21077
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why offtopic is going to hurt us all...
« Reply #60 on: October 07, 2024, 01:17:03 pm »
The other absolutely key point is that you can ask a person why they chose their course of action. That is still - after 40 years - an LLM "active research topic". Translation: good people have repeatedly tried and failed.

You can ask a person that, but you may not get a truthful answer. Or even a meaningful one.

Of course. But people treat computers differently.

I guess you are lucky and haven't been on the wrong end of something equivalent to "the computer says so and the computer is infallible/unbiased; therefore go forth and multiply". I hope your luck continues.

That would never happen with conventional search results. It does happen with LLMs and other computer systems that hide their reasons for the results, often invoking commercial confidentiality.

Please read comp.risks, a high quality moderated mailing list which has been going for almost 40 years. The contributors are giants in the fields of reliability and how things and systems fail. Searchable archive at http://catless.ncl.ac.uk/Risks/



Quote
It seems to me there are two loud camps in the LLM conversation: those that consistently deny that an LLM (as we know it currently) can surpass a human in (a) any or (b) all traits, or (c) that any model (current or foreseeable) ever can; and those that are head-over-heels impressed with the successes, so much so that they are willfully ignorant of the blemishes of current tech (but, case in point, that error is indeed decreasing as models improve, and it's yet to be seen where it ends).

The deniers will willfully refuse to accept the inevitable advance, until they are subsumed by it, obsoleted.  The enthusiasts play the gamble of riding the curve, hoping that those current blemishes are slight enough, or can be ignored to adequate (read: marketable) satisfaction by enough users, to point to it and say "see how cool this is?", without it crashing and burning in the near term -- or, to hope to make big bucks by investing in it.  Meanwhile, the enthusiasts may hope it keeps going up long term; but that's a devil's bargain, as their best long-term prospect is also to be obsoleted by it.

I would humbly suggest a centrist-flavored trilemma, which -- granted, as centrism often is, may reflect my abject ignorance on this topic -- but to say that both of these things can be true to varying degrees, ...

I agree, and dislike both extremes. I admit I distrust the technology shills and acolytes more, because the consequences of their statements are likely to be more deleterious.

Nonetheless, it is possible to understand the general principles behind an LLM's operation, and then to use that understanding to enquire about inherent failure modes. The same is true for other technology/politics/etc.

N.B. OpenAI specifically forbids users from trying to discover their "models", which can be interpreted as trying to figure out the reasoning and the envelope limits of the LLM's output. I don't think that has been tested in court, and I would expect manufacturers to try to avoid having it tested in court.


Quote
...and that there exists a far more terrifying insight that both camps have missed and should instead be focusing on:

Sooner or later, we will realize the total extent of the human psyche; we will map out the brain and "explain" consciousness -- as much by designing and probing these models, and seeing how they reflect upon (analogize, or indeed even emulate) the behavior and nature of the meat kind, as by probing the meat kind directly.  A dark, Lovecraftian sort of horror awaits us here: there will be no discovery of "soul", no deep spiritual insight, only the deep and forbidden knowledge of the self on a fundamental working level (if to a very loose level of abstraction, as a system cannot possibly understand itself *fully* from within itself, but at certain levels of understanding, or approximations thereto, sure).  Rather than the maddening infinite, we will be faced with the utter banality of algorithmic existence.  Will this drive some to solipsism?  Perhaps others will seek refuge in belief, denying until the end that such knowledge can even exist?  Will some simply make peace with existence as a finite and knowable being?  Perhaps there will be a spiritual awakening, working outward rather than inward: having disproven metaphysics, we might redefine "spirit" or "soul", "consciousness", "free will", etc. into descriptive processes or perceptions, rather than the unfalsifiable belief systems people create around them today.

Philosophy and religion often intermingle with hopes and fears. Sometimes the resulting emissions are entertaining, but they are rarely useful.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline EEVblog

  • Administrator
  • *****
  • Posts: 38988
  • Country: au
    • EEVblog
Re: Why offtopic is going to hurt us all...
« Reply #61 on: October 07, 2024, 01:50:38 pm »
Indeed, Google as the search-engine-to-go option is rapidly diminishing.

I pretty much now only use google search if I'm after a link to a specific thing.
If I have an actual question I'm looking for an answer for I go straight to ChatGPT.
Gone are the days of trying to ask google a question and hoping you get some resource page that might answer it.
 

Offline tggzzz

  • Super Contributor
  • ***
  • Posts: 21077
  • Country: gb
  • Numbers, not adjectives
    • Having fun doing more, with less
Re: Why offtopic is going to hurt us all...
« Reply #62 on: October 07, 2024, 01:55:37 pm »
Indeed, Google as the search-engine-to-go option is rapidly diminishing.

I pretty much now only use google search if I'm after a link to a specific thing.
If I have an actual question I'm looking for an answer for I go straight to ChatGPT.
Gone are the days of trying to ask google a question and hoping you get some resource page that might answer it.

There's too much auto-generated crap around (not even LLM, yet) plus search engine optimisation.

The former is a harbinger, the latter will mutate into LLMOptimisation.

(7s long; if only all yootoob videos were as terse)
« Last Edit: October 07, 2024, 01:58:41 pm by tggzzz »
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4688
  • Country: nz
Re: Why offtopic is going to hurt us all...
« Reply #63 on: October 07, 2024, 02:24:01 pm »
They don't like it up 'em.

One of my last memories of my maternal grandfather, who died when I was quite young, was him taking me to the 1971 Dad's Army movie, shown in the hall of the village near his farm.
 

Offline bte

  • Contributor
  • Posts: 21
  • Country: tr
Re: Why offtopic is going to hurt us all...
« Reply #64 on: October 29, 2024, 08:20:51 pm »
You ask it what's the second biggest capital in Europe, and it tells you that it's Istanbul.

Isn't that correct, or at least arguable? It's complicated.


Considering that Istanbul is not (and never was) the capital of the Republic of Turkey, I would say it is incorrect and not arguable. I wonder about the sources ChatGPT was trained on, such that it considers Istanbul a capital city.
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15683
  • Country: fr
Re: Why offtopic is going to hurt us all...
« Reply #65 on: October 29, 2024, 08:26:29 pm »
It's degenerative AI.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4688
  • Country: nz
Re: Why offtopic is going to hurt us all...
« Reply #66 on: October 29, 2024, 08:30:22 pm »
You ask it what's the second biggest capital in Europe, and it tells you that it's Istanbul.

Isn't that correct, or at least arguable? It's complicated.


Considering that Istanbul is not (and never was) the capital of the Republic of Turkey, I would say it is incorrect and not arguable. I wonder about the sources ChatGPT was trained on, such that it considers Istanbul a capital city.

Oh! I'd never considered that the problem might lie in "capital" not "in Europe" or "biggest".
 
