
Scientific publishing


CatalinaWOW:

--- Quote from: Nominal Animal on October 25, 2023, 05:15:29 pm ---Just like the proof of the pudding is in the eating, proper academic research is equal parts hard work towards any discovery and communicating it to others in a useful form.

If you start using babble generators for one, why not use them for the other as well?

The thing about language models is that if you use them to refine your output because you cannot do so yourself, you then also lack the skill to analyse and verify the generated output (in the remote chance that you would even bother trying).

The inevitable end result is to increase academic output while decreasing the reliability and value of the content. More shit.
It is also the exact opposite of using the scientific method, leaving half of the work to a black-box language model to splooge out.
This is not what current academia needs, considering the low-quality work already proliferating (as measured by retractions, later findings of method failures or improper data selection or filtering, and, most importantly, the impossibility of replicating most results).

I did not use the analogy of a gun instructor shooting their own foot as hyperbole; I used it because it really is closely analogous to the field in general.
As mentioned by others, we already have commonly accepted practices in place that inevitably reduce the quality of articles (because those articles had to be written to keep their authors funded, regardless of whether the content has merit on its own), and widespread "gaming" of the system by creating clusters of articles that cite each other with few or no external references, just so that by the current metrics the authors appear successful and valued in their field.

As there is no organized effort to curb any of it, in my analogy obtaining guns has already been made easy, legal, and cheap.  All we need now is random fire hurting everyone nearby, from large-language-model-enabled enthusiasts with minimal to no patience or skill for actual scientific work.  How many people are willing to work in such an environment?  Not the most intelligent ones, I can guarantee that.

:rant:

--- End quote ---

The dangers of AI are real.  But there is a danger in your idealized path also.  I have known several individuals who were brilliant in their fields, but who had poor communication skills and little interest in diverting time from their fields of interest to improve those skills.  You seem to be suggesting that such individuals do not merit publication and, by inference, that their work is not valuable.

My mentor in my first job was such a person, and in one sense my job was to translate for him.  One response would be to say that such people should have collaborators, with two or more individuals making up one "fully competent" team.  It still remains for the savant to verify that the collaborators have correctly interpreted and reported the work.  But if that is acceptable, how is it different from using an AI to aid in the documentation function?

All of the complaints I have seen in this thread are really aimed not at AI, but at the lack of diligence in its use.  But laziness and dishonesty are not unique to AI.  Perhaps the real problem is that AI simplifies the production of things which on the surface appear robust and legitimate, and that it takes too much work on the part of the reader to determine their value.  The same concern applies to using search engines to find citation counts for articles and treating those counts as a measure of a work's value, as opposed to actually reading and evaluating the original citation and the articles citing it.  Or to using meta-analyses that give the percentage or count of papers on a particular side of an issue as evidence for the validity of one position or the other.

Nominal Animal:

--- Quote from: CatalinaWOW on October 25, 2023, 06:13:21 pm ---You seem to be suggesting that such individuals do not merit publication and, by inference, that their work is not valuable.
--- End quote ---
No, I am saying that by using LLMs they will fail.

The proper approach, like yours in your first job, is to have someone intelligent assist with the task.  It is not something anybody off the street can do: they need domain knowledge, and a personality compatible with the individual's, so they can effectively discuss exactly what should be communicated.
I understand this very well, because in many projects I was a similar translator across non-overlapping domains of expertise, successfully, and I highly value it when someone else does the same for me.

In fact, I am willing to bet that your own domain understanding grew by leaps and bounds at that time –– or that you failed, and were replaced with someone else better suited for the task.  Could LLMs have done what you did, at the same quality?  I do not think so for a second.


--- Quote from: CatalinaWOW on October 25, 2023, 06:13:21 pm ---It still remains for the savant to verify that the collaborators have correctly interpreted and reported the work.  But if that is acceptable, how is it different from using an AI to aid in the documentation function?
--- End quote ---
When we actually develop AI, we can discuss that.  For now, as you well know, what we have are statistical large language models with zero understanding of the content.  They are babble generators, nothing more: ones with very detailed and extensive statistical relationships between terms and expressions, sure, so they can appear intelligent because they draw from existing intelligent writing; but they're nowhere near "intelligence" in any of its definitions.
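To make concrete what I mean by "statistical relationships between terms", here is a deliberately crude sketch: a bigram Markov chain in Python that emits text purely by sampling which word was observed to follow which in its training text.  This is my own toy illustration, not how an actual LLM is implemented; real models learn vastly richer statistics with neural networks, but the principle of producing plausible-looking text from co-occurrence statistics, with zero understanding, is the same.

--- Code: ---
import random
from collections import defaultdict

def train(text):
    """For every word, record the list of words observed to follow it."""
    follows = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def babble(follows, start, length=15):
    """Emit text by repeatedly sampling a successor of the last word."""
    out = [start]
    for _ in range(length - 1):
        successors = follows.get(out[-1])
        if not successors:  # dead end: this word was never seen followed by anything
            break
        out.append(random.choice(successors))
    return " ".join(out)

corpus = ("the method was validated by the data and the data supported "
          "the method because the method was sound and the data was clean")
model = train(corpus)
print(babble(model, "the"))  # e.g. "the data and the method was sound ..."
--- End code ---

It will happily emit grammatical-looking statements about "the method" and "the data" without any notion of what either means.  Scale the statistics up by many orders of magnitude and you get fluent babble instead of clumsy babble; understanding never enters into it.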

Cases where a person is capable of understanding a text but incapable of producing such text themselves are exceedingly rare.  We're talking John Nash rare.

(If you permit me going back to the gun analogy: of course there are those who use and need guns.  I am not banning guns, nor am I banning LLM use.  I am describing the scenario where their use becomes ubiquitous, because it is cheaper and easier than the alternatives; and especially cheaper and easier than doing the work yourself, even when you could learn to with sufficient effort, which applies to almost all scientists.)


--- Quote from: CatalinaWOW on October 25, 2023, 06:13:21 pm ---All of the complaints I have seen in this thread are really not aimed at AI, but at the lack of diligence in its use.
--- End quote ---
You can make the same argument about medical controls and the use of narcotics.  Or gun control.  Or, really, any human behaviour where risks are low and rewards high.

The "let's just tell people to be more diligent" argument does not fly at all.  It has not flown in any other context without real enforcement, so why would it work for science?  Do you really mean you believe scientists are better (as in more ethical, more moral, more diligent) than the average person?  I do not believe that for a second.

jpanhalt:
@CatalinaWOW

I think we agree more than disagree.  Yes, AI presents threats, but my main concern is people using its failures as excuses.

Robert Good, in my opinion, had no legitimate excuse.  The off-duty airline pilot who recently tried to turn off the engines of a regional flight now blames "magic mushrooms" (psilocybin?) for his actions.  That is not an acceptable excuse.  Of his own free will, he used a psychedelic drug, and he needs to be held to the same level of accountability as if he hadn't.  Unfortunately, our system (USA) seems to give weight to such excuses.

thm_w:

--- Quote from: jpanhalt on October 25, 2023, 02:56:00 pm ---
--- Quote from: thm_w on October 23, 2023, 10:11:27 pm ---"He finds it particularly useful for suggesting clearer ways to convey his ideas."
Why would anyone think that means putting false citations into the paper?

Next time just post here: https://www.eevblog.com/forum/chatgptai/

--- End quote ---

Why did an attorney do that?  Ours is not to question why.  It has happened and will happen.

"Academic honesty" is often a myth and facts to support that have already been presented.  Do you consider what Dr. Good in the Summerlin matter was "honest?"

--- End quote ---

A lawyer did it, and the false citations were easily seen, resulting in a $5,000 fine and loss of the case. Not sure why this is suddenly the end of the world for scientific publishing.

For high-school homework? Sure, it has a big effect.


--- Quote from: Dan123456 on October 25, 2023, 01:51:55 pm ---For me the big issue is that a lot of academics must release a certain number of papers over a set period under their employment contracts. This is a big part (at least in my mind) of why so many papers are already just utter garbage.

Normalising the use of tools like ChatGPT in papers by people who are stressed out / crunching while forced to write non-original, boring crap just to keep their jobs (or worse, by the lazy ones who don’t even try at all) almost guarantees some of them are going to use it inappropriately!
--- End quote ---

Most will use it appropriately; some will not. If they are writing garbage, it will end up in some low-tier journal, as is already the case.
As you say, the main problems lie elsewhere (publish or perish, plus publication bias).

CatalinaWOW:
True AI?  I am not sure we can even define what intelligence is.

Calling the large language models babble generators overstates the case.  They generate a lot of tripe.  I haven't reviewed a large enough sample to even guess at what percentage that is, but I would agree that it is hugely weighted towards the tripe end of the scale.  But that is many orders of magnitude better than the classic building full of monkeys with typewriters.

I would agree that people who are truly superior at something and truly cannot communicate are rare.  But there is a broad spectrum of this capability, just as in any other area of human performance.  I would also suggest that the ability to use current AI approaches effectively varies widely.  For example, Hawking seems an obvious choice of someone who could have benefited from help with the mechanics of communication.  But it is not obvious to me that curating ChatGPT output would have been easier than his tedious interaction with a keyboard.  Those who might benefit the most are in the intermediate category.

I readily agree that there will always be people who misuse any technology.  But I am also not aware of any successful ban where the ban was not aligned with the perceived self-interest of the vast majority of those affected, which is a large part of why gun control sort of works in Europe and has not been successful in the US.  It also aligns with the argument that the publication system is broken because publication, rather than quality, is in the self-interest of a large fraction of the community.
