Author Topic: Scientific publishing


Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 15800
  • Country: fr
Scientific publishing
« on: October 21, 2023, 07:21:41 pm »
As if the current state of scientific publishing weren't bad enough, we're heading for even "better":
https://www.nature.com/articles/d41586-023-03144-w
 
The following users thanked this post: pdenisowski, Nominal Animal, Dan123456

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7198
  • Country: fi
    • My home page and email address
Re: Scientific publishing
« Reply #1 on: October 22, 2023, 06:35:33 am »
:palm:

This is like watching a gun instructor shoot themselves in the foot.
 
The following users thanked this post: Dan123456

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 15800
  • Country: fr
Re: Scientific publishing
« Reply #2 on: October 22, 2023, 07:48:37 pm »
:palm:

This is like watching a gun instructor shoot themselves in the foot.

Yeah, pretty much so.
 

Offline jpanhalt

  • Super Contributor
  • ***
  • Posts: 4005
  • Country: us
Re: Scientific publishing
« Reply #3 on: October 22, 2023, 08:46:40 pm »
Well, it is the University of Washington in Seattle, WA.  A large area of that state wants to secede and join the adjacent state of Idaho.  There is precedent for that: West Virginia was split off from Virginia, but times were very different then.
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5571
  • Country: us
Re: Scientific publishing
« Reply #4 on: October 22, 2023, 11:55:45 pm »
It seems that all they are doing is using ChatGPT as a super spell and grammar checker.  Not a horrible idea.  But spell and grammar checkers create their own havoc if not carefully monitored.  The more powerful ChatGPT will require equally close monitoring.  Something we humans aren't particularly good at.
 
The following users thanked this post: thm_w

Offline jpanhalt

  • Super Contributor
  • ***
  • Posts: 4005
  • Country: us
Re: Scientific publishing
« Reply #5 on: October 23, 2023, 09:13:25 am »
That's not how I interpreted his comments.  He is using it to help write.  If that includes, for example, finding citations on point, and he uses them without checking, then he has crossed a line and committed academic fraud.  That is not unlike the recent instance of the lawyers who cited non-existent cases to support their arguments.

That is just one example.  It is easy to imagine other scenarios where "helping" him get over writer's block effectively creates an unnamed and unaccountable coauthor.  Unscrupulous individuals will commit scientific fraud, and one should not blame the tools they use to do that.  ChatGPT is one such tool.  However, in my experience, academia has failed to adequately hold such individuals accountable.  If an underling does it, they are dismissed, not blacklisted.

If a famous scientist is involved (like Robert Good and Robert Gallo (HIV)) it's a scandal, the underling is blamed, and the leader of the fraud moves on, maybe even getting a Nobel Prize.  In the future, such scientists may simply blame ChatGPT, as it cannot defend itself.

Wikipedia has a list of such frauds, and the underling in the Robert Good case is mentioned (Summerlin), but not Robert Good.  Here are some links that may be of interest:

https://en.wikipedia.org/wiki/List_of_scientific_misconduct_incidents
https://www.nytimes.com/1974/05/25/archives/article-5-no-title-fraud-is-charged-at-cancer-center-premature.html (easier to access)

Robert Good (https://en.wikipedia.org/wiki/Robert_A._Good)
Robert Gallo (https://en.wikipedia.org/wiki/Robert_Gallo)
 
The following users thanked this post: Stray Electron, RJSV

Offline thm_w

  • Super Contributor
  • ***
  • Posts: 7527
  • Country: ca
  • Non-expert
Re: Scientific publishing
« Reply #6 on: October 23, 2023, 10:11:27 pm »
"He finds it particularly useful for suggesting clearer ways to convey his ideas."
Why would anyone think that means putting false citations into the paper?

Next time just post here: https://www.eevblog.com/forum/chatgptai/
 
The following users thanked this post: ebastler

Offline Dan123456

  • Regular Contributor
  • *
  • Posts: 199
  • Country: au
Re: Scientific publishing
« Reply #7 on: October 25, 2023, 01:51:55 pm »
"He finds it particularly useful for suggesting clearer ways to convey his ideas."
Why would anyone think that means putting false citations into the paper?

Next time just post here: https://www.eevblog.com/forum/chatgptai/

For me the big issue is that a lot of academics must publish a certain number of papers over a set period under their employment contracts. This is a big part (at least in my mind) of why so many papers are already just utter garbage.

Normalising the use of tools like ChatGPT in papers from people who are stressed out / crunching while forced to write non-original, boring crap just to keep their jobs (or worse, the lazy ones who don’t even try at all) almost guarantees some of them are going to use it inappropriately!

I also reckon that someone asking ChatGPT to "expand their dataset" or whatnot probably feels less naughty compared to flat-out falsifying data, and I imagine there is a psychological component making it mentally easier for people to justify it to themselves too (there might be a paper idea there for anyone who wants it. You don't even have to cite me :P).

The even bigger issue is that other people will then have to waste their time peer reviewing and discrediting that crap!

Don't get me wrong, I'm not blaming the academics. I think it's the system that is busted, and we should be asking them for quality of papers rather than quantity.
 

Offline jpanhalt

  • Super Contributor
  • ***
  • Posts: 4005
  • Country: us
Re: Scientific publishing
« Reply #8 on: October 25, 2023, 02:56:00 pm »
"He finds it particularly useful for suggesting clearer ways to convey his ideas."
Why would anyone think that means putting false citations into the paper?

Next time just post here: https://www.eevblog.com/forum/chatgptai/

Why did an attorney do that?  Ours is not to question why.  It has happened and will happen again.

"Academic honesty" is often a myth and facts to support that have already been presented.  Do you consider what Dr. Good in the Summerlin matter was "honest?"
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7198
  • Country: fi
    • My home page and email address
Re: Scientific publishing
« Reply #9 on: October 25, 2023, 05:15:29 pm »
Just like the proof of the pudding is in the eating, proper academic research is equal parts hard work towards a discovery and communicating it to others in a useful form.

If you start using babble generators for one, why not use them for the other as well?

The thing about language models is that if you use them to refine your output because you cannot do so yourself, you then also lack the skill to analyse and verify the generated output (even in the remote chance that you would bother trying).

The inevitable end result is to increase academic output while decreasing the reliability and value of the content. More shit.
It is also the exact opposite of using the scientific method, leaving half of the work to a black-box language model to splooge out.
This is not what current academia needs, considering the already low-quality work proliferating (as measured by retractions and later findings of method failures or improper data selection or filtering, and most importantly, the impossibility of replicating most results).

I did not use the analogy of a gun instructor shooting their own foot as hyperbole, but because it really is closely analogous to the field in general.
As mentioned by others, we already have commonly accepted practices in place that inevitably reduce the quality of articles (because those articles had to be written to keep their authors funded, regardless of whether the content has merit on its own), and widespread "gaming" of the system by creating clusters of articles that cite each other with few or no external references and citations, just so that, by the current metrics, the authors appear successful and valued in their field.  As there is no organized effort to curb any of it, in my analogy obtaining guns has already been made easy, legal, and cheap.  All we need now is random fire hurting everyone nearby, from large-language-model enabled enthusiasts with minimal to no patience or skill for actual scientific work.  How many people are willing to work in such an environment?  Not the most intelligent ones, I can guarantee that.

:rant:
« Last Edit: October 25, 2023, 05:17:01 pm by Nominal Animal »
 
The following users thanked this post: Siwastaja, SiliconWizard

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5571
  • Country: us
Re: Scientific publishing
« Reply #10 on: October 25, 2023, 06:13:21 pm »
Just like the proof of the pudding is in the eating, proper academic research is equal parts hard work towards a discovery and communicating it to others in a useful form.

If you start using babble generators for one, why not use them for the other as well?

The thing about language models is that if you use them to refine your output because you cannot do so yourself, you then also lack the skill to analyse and verify the generated output (even in the remote chance that you would bother trying).

The inevitable end result is to increase academic output while decreasing the reliability and value of the content. More shit.
It is also the exact opposite of using the scientific method, leaving half of the work to a black-box language model to splooge out.
This is not what current academia needs, considering the already low-quality work proliferating (as measured by retractions and later findings of method failures or improper data selection or filtering, and most importantly, the impossibility of replicating most results).

I did not use the analogy of a gun instructor shooting their own foot as hyperbole, but because it really is closely analogous to the field in general.
As mentioned by others, we already have commonly accepted practices in place that inevitably reduce the quality of articles (because those articles had to be written to keep their authors funded, regardless of whether the content has merit on its own), and widespread "gaming" of the system by creating clusters of articles that cite each other with few or no external references and citations, just so that, by the current metrics, the authors appear successful and valued in their field.  As there is no organized effort to curb any of it, in my analogy obtaining guns has already been made easy, legal, and cheap.  All we need now is random fire hurting everyone nearby, from large-language-model enabled enthusiasts with minimal to no patience or skill for actual scientific work.  How many people are willing to work in such an environment?  Not the most intelligent ones, I can guarantee that.

:rant:

The dangers of AI are real.  But there is a danger in your idealized path also.  I have known several individuals who were brilliant in their fields, and who had poor communication skills and little interest in diverting time from their fields of interest to improve those skills.  You seem to be suggesting that such individuals do not merit publication, and by inference that their work is not valuable.

My mentor in my first job was such a person, and in one sense my job was to translate for him.  One response would be to say that such people should have collaborators, with two or more individuals making up one "fully competent" team.  It still remains for the savant to verify that the collaborators have correctly interpreted and reported the work.  But if this is acceptable, how is that different from using an AI to aid in the documentation function?

All of the complaints I have seen in this thread are really not aimed at AI, but at the lack of diligence in its use.  But laziness and dishonesty are factors that are not unique to AI.  Perhaps the real problem is that AI simplifies the production of things which on the surface appear robust and legitimate, and that it takes too much work on the part of the reader to determine value.  The same concern applies to using search engines to find citation counts for articles and using those as a measure of a work's value, as opposed to actually reading and evaluating the original citation and the articles citing it.  Or to using a meta-analysis that provides percentages or counts of papers on a particular side of an issue as evidence for the validity of one position or the other.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7198
  • Country: fi
    • My home page and email address
Re: Scientific publishing
« Reply #11 on: October 25, 2023, 06:58:50 pm »
You seem to be suggesting that such individuals do not merit publication, and by inference that their work is not valuable.
No, I am saying that by using LLMs they will fail.

The proper approach is, like yours in your first job, to have someone intelligent assist in the task.  It is not something anybody off the street can do: they require domain knowledge, and a personality compatible with the individual's, so they can effectively discuss exactly what should be communicated.
I understand this very well, because in many projects I was a similar translator across non-overlapping domains of expertise, successfully, and I highly value it when someone else does the same for me.

In fact, I am willing to bet that your own domain understanding grew by leaps and bounds at that time –– or that you failed, and were replaced with someone else better suited for the task.  Could LLMs have done what you did, at the same quality?  I do not think so for a second.

It still remains for the savant to verify that the collaborators have correctly interpreted and reported the work.  But if this is acceptable, how is that different from using an AI to aid in the documentation function?
When we actually develop AI, we can discuss that.  For now, as you well know, what we have are statistical large language models with zero understanding of the content.  They are babble generators, nothing more.  Ones with very detailed and extensive statistical relationships between terms and expressions, sure, so they can appear to be intelligent because they draw from existing intelligent writing; but they're nowhere near "intelligence" in any of its definitions.

Cases where a person is capable of understanding the text, but incapable of producing such text themselves, are exceedingly rare.  We're talking John Nash rare.

(If you permit me to go back to the gun analogy: of course there are those who use and need guns.  I am not banning guns, nor am I banning LLM use.  I am describing the scenario where their use becomes ubiquitous, because it is cheaper and easier than the alternatives; and especially cheaper and easier than doing it yourself, even when you could if you spent sufficient effort learning to, which applies to almost all scientists.)

All of the complaints I have seen in this thread are really not aimed at AI, but at the lack of diligence in its use.
You can make the same argument about medical control, and the use of narcotics.  Or gun control.  Or, really, any human behaviour where risks are low and rewards high.

The "let's just tell people to be more diligent" argument does not fly at all.  It has not flown in any other context without real enforcement, so why would it work for science?  Do you really mean you believe scientists are better (as in more ethical, more moral, more diligent) than the average person?  I do not believe that for a second.
 

Offline jpanhalt

  • Super Contributor
  • ***
  • Posts: 4005
  • Country: us
Re: Scientific publishing
« Reply #12 on: October 25, 2023, 06:59:35 pm »
@CatalinaWOW

I think we agree more than disagree. Yes, AI presents threats, but my main concern is using its failures as excuses.

Robert Good, in my opinion, had no legitimate excuse.  The off-duty airline pilot who recently tried to turn off the engines of a regional flight now blames "magic mushrooms" (psilocybin?) for his actions.  That is not an acceptable excuse.  Of his own free will, he used a psychedelic drug, and he needs to be held to the same level of accountability as if he hadn't.  Unfortunately, our system (USA) seems to give weight to such excuses.
 

Offline thm_w

  • Super Contributor
  • ***
  • Posts: 7527
  • Country: ca
  • Non-expert
Re: Scientific publishing
« Reply #13 on: October 25, 2023, 09:46:29 pm »
"He finds it particularly useful for suggesting clearer ways to convey his ideas."
Why would anyone think that means putting false citations into the paper?

Next time just post here: https://www.eevblog.com/forum/chatgptai/

Why did an attorney do that?  Ours is not to question why.  It has happened and will happen again.

"Academic honesty" is often a myth and facts to support that have already been presented.  Do you consider what Dr. Good in the Summerlin matter was "honest?"

A lawyer did it, and the false citations were easily seen, resulting in a $5,000 fine and loss of the case. Not sure why this is suddenly the end of the world for scientific publishing.

For high-school homework? Sure, it has a big effect.

For me the big issue is that a lot of academics must publish a certain number of papers over a set period under their employment contracts. This is a big part (at least in my mind) of why so many papers are already just utter garbage.

Normalising the use of tools like ChatGPT in papers from people who are stressed out / crunching while forced to write non-original, boring crap just to keep their jobs (or worse, the lazy ones who don’t even try at all) almost guarantees some of them are going to use it inappropriately!

Most will use it appropriately, some will not. If they are writing garbage it will end up in some low-tier journal, as is already the case.
As you say, the main problems lie elsewhere (publish or perish + publication bias).
 

Offline CatalinaWOW

  • Super Contributor
  • ***
  • Posts: 5571
  • Country: us
Re: Scientific publishing
« Reply #14 on: October 25, 2023, 11:09:41 pm »
True AI?  I am not sure we can even define what intelligence is.

Calling the large memory models Babel generators overstates the case.  They generate a lot of tripe.  I haven't reviewed a large enough sample to even guess at what percentage that is, but I would agree that it is hugely weighted towards the tripe end of the scale.  But that is many orders of magnitude better than the classic building full of monkeys with typewriters.

I would agree that the people who are truly superior at something and truly cannot communicate are rare.  But there is a broad spectrum of this capability just as in any other area of human performance.  I would also suggest that the ability to effectively use current AI approaches varies widely.  For example, Hawking seems an obvious choice of someone who could have benefited from help with the mechanics of communication.  But it isn't obvious to me that curating ChatGPT output would have been easier than his tedious interaction with a keyboard.  Those who might benefit the most are in the intermediate category.

I readily agree that there will always be people who misuse any technology.  But I am also not aware of any successful ban that was not aligned with the perceived self-interest of the vast majority of those affected.  That is a large part of why gun control sort of works in Europe and hasn't been successful in the US.  It also aligns with the argument that the publication system is broken because publication, rather than quality, is in the self-interest of a large fraction of the community.
 

Offline thm_w

  • Super Contributor
  • ***
  • Posts: 7527
  • Country: ca
  • Non-expert
Re: Scientific publishing
« Reply #15 on: October 25, 2023, 11:31:01 pm »
I would agree that the people who are truly superior at something and truly cannot communicate are rare.

Add to that people who can't communicate perfectly in English when it is not their native language.
75-90% of papers are published in English. A paper written in another language is a lot less likely to be cited. Wanting to spend your time on something other than learning English seems like it should be an allowable choice to make. Some people are terrible at learning a language but are amazing scientists.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7198
  • Country: fi
    • My home page and email address
Re: Scientific publishing
« Reply #16 on: October 26, 2023, 11:34:58 am »
the publication system is broken because publication, rather than quality, is in the self-interest of a large fraction of the community.
Yes: I also believe that is the largest contributing factor.

True AI?  I am not sure we can even define what intelligence is.
The g-factor as used in psychometrics, measured by the statistical ability to solve problems one has not encountered before, is a useful definition.
I tend to default to that one, until a more useful definition happens to crop up.

Calling the large memory models babble generators overstates the case.
I disagree, obviously, but here is the reason: the models construct sentences based on a massive set of internally weighted statistical relationships.  (Look up the machine-learning transformer model for a better description of that.)

LLMs do not consider the content of any word –– technically speaking, the exact difference between the token at hand and its nearest neighbours.  At best, you can claim they roughly model the relative magnitude of such differences.

One definition of babble is "to utter meaningless words".  I am using that as the exact technical definition for the output, since LLMs cannot know the meaning of individual words, only their relationships to each other.  The latter is also why the output seems intelligent, but is not.  (A comparison to a very powerful search engine over its source material using fuzzy matching is also apt.)
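
To make that concrete, here is a minimal toy sketch of my own, in Python.  It is nothing like a real transformer, which learns attention-weighted relationships over token embeddings rather than raw adjacency counts, but it shows the principle: plausible-looking text produced purely from observed statistical relationships between words, with no notion of what any word means.  (The corpus is made up for illustration.)

Code: [Select]
import random
from collections import defaultdict

# Toy bigram "babble generator": it records only which word was observed
# to follow which (statistical relationships), and nothing about meaning.
corpus = (
    "the results show that the proposed method improves performance "
    "the proposed method is evaluated on the standard benchmark "
    "the results on the benchmark show that performance improves"
).split()

# Count the observed successors of each word.
successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

def babble(start, length=12):
    """Babble by repeatedly sampling a statistically plausible next word."""
    words = [start]
    for _ in range(length):
        choices = successors.get(words[-1])
        if not choices:  # no observed successor: dead end
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(babble("the"))
# Possible output: "the proposed method is evaluated on the standard benchmark ..."

Fluent-looking and locally coherent, yet utterly devoid of understanding; an LLM is this same idea scaled up by many orders of magnitude, with far longer-range relationships.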

I do recognize that the human language acquisition process also starts with a similar charting of relationships.  However, by interacting with others (especially face-to-face, so that body language and microexpressions will affect our own understanding of each term) we refine the meaning the word has for ourselves.  Similarly, sentence construction, word order, and so on are built in interaction, with each interaction adding meaning (based on the difference in original intent versus reaction observed) on top of those associations.

I would agree that the people who are truly superior at something and truly cannot communicate are rare.  But there is a broad spectrum of this capability just as in any other area of human performance.
Sure: just look at my own output.  I often fail English.  Like LLMs, my output is verbose and typically well-structured, yet I still fail, because of my lack of face-to-face use (which deprives my language understanding of the direct feedback that bypasses the conscious mind), and my failure to predict how specific terms and sentences are understood/perceived/assigned meaning by others.

I do believe that if we develop LLMs into tools that can track their sources, and use models generated from controlled datasets, we can build tools that would help a lot, especially with scientific communication.

In simple terms, that corresponds to creating LLMs that can translate jokes and anecdotes, perhaps even poems, across languages while still tracking the reasons for their choices (for example, the references to the source materials that most strongly affected each choice).
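
As a toy illustration of that tracking (again a sketch of my own, not any existing tool): extend the bigram babbler above so that every transition remembers which source sentence it was observed in, and the output then carries its own provenance.

Code: [Select]
import random
from collections import defaultdict

# Toy "babble with provenance": each word transition remembers the source
# sentence it came from, so every generated word can cite its origin.
# (The source sentences are made up for illustration.)
sources = [
    "the proposed method improves performance on the benchmark",
    "the results show that the method improves recall",
]

successors = defaultdict(list)
for sid, sentence in enumerate(sources):
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        successors[prev].append((nxt, sid))  # remember the source id

def babble_with_provenance(start, length=8):
    out = [(start, None)]  # (word, id of the source that justified it)
    for _ in range(length):
        choices = successors.get(out[-1][0])
        if not choices:  # no observed successor: dead end
            break
        out.append(random.choice(choices))
    return out

for word, sid in babble_with_provenance("the"):
    print(word if sid is None else f"{word}[src {sid}]", end=" ")
print()
# Possible output: "the proposed[src 0] method[src 0] improves[src 1] recall[src 1]"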

To continue my gun analogy, those would correspond to bolt guns, dart guns with a variety of medical substances available, guns designed for safely shooting blanks at short range (for use in entertainment), and so on.  We just are not there yet.  Nobody seems even remotely interested in developing such tools, in fact.  Instead, LLMs are used as if they were already 'there', with the end result that they only make it easier for those who do not have anything meaningful to say to couch that non-message in an attractive outer shape.  Thus, my opinion that they just cause more shit to be generated.

(It is interesting to compare LLM proponents' assertions and beliefs to those of explosive and weapon inventors.  The belief that sufficiently efficient killing machines would make war too costly in human lives, and thus prevent wars and save lives, has been common.  But perhaps this comparison is too 'angry', and something like adding tetraethyl lead to gasoline to aid engine efficiency would be more apt.  Or perhaps that too is 'too negative' for the LLM proponents.)

One of my hobbies is looking for science fiction stories with interesting storylines.  I'm not that interested in the characters per se; I'm mostly interested in the events depicted.  Many aspiring authors are now using LLMs to "flesh out their ideas", and the output (from my point of view) is such crap and such a waste of my time that I've started to avoid looking at the output of new authors altogether.  Granted, perhaps my view of LLM use is overly negatively colored because of this, but having an academic background myself, I do not see scientific authors behaving any differently.
« Last Edit: October 26, 2023, 11:37:29 am by Nominal Animal »
 

Offline jpanhalt

  • Super Contributor
  • ***
  • Posts: 4005
  • Country: us
Re: Scientific publishing
« Reply #17 on: October 26, 2023, 01:25:07 pm »
I would agree that the people who are truly superior at something and truly cannot communicate are rare.

Add to that people who can't communicate perfectly in English when it is not their native language.
75-90% of papers are published in English. A paper written in another language is a lot less likely to be cited. Wanting to spend your time on something other than learning English seems like it should be an allowable choice to make. Some people are terrible at learning a language but are amazing scientists.

If one considers scientific writing per se, it can be divided into three efforts, excluding such things as copy editing, addressing reviewers' comments, and so forth.  Those three efforts might be called literature review & citation, composition, and language translation.

I am concerned about the composition part, be it by a ghost writer or AI.  Computer translation is not an issue, and computer-aided literature review has been around for more than 50 years.  Ghost writers are generally revealed in the paper, either under the authors' names or in an acknowledgement.  And in my view, regardless of how the text is created, those whose names appear as authors need to be held fully accountable, with no excuses.  That was not the case in the Robert Good instance, nor in any other scandal of which I am aware.

 
 

Offline hans

  • Super Contributor
  • ***
  • Posts: 1698
  • Country: nl
Re: Scientific publishing
« Reply #18 on: October 26, 2023, 04:46:56 pm »
Most of my paper "reading" doesn't even include reading text. I may read the abstract and introduction/conclusion. But most of the searching and filtering is done by browsing figures, and only if something looks interesting will I look at the text for details. I suppose it's similar when browsing datasheets. Figures.. tables.. that's where the content is at.

In all the papers I have written, the journey starts with selecting material and getting the math & figures nicely laid out. We read papers from figures.. we don't write novels, and so I don't think an AI will offer much assistance on the actual content side of a paper.

Of course there still needs to be some text, and you don't need AI to have hot takes. It happens all the time.. but let's not forget that papers are not books, and I think it's hazardous to treat them as 100% factual. The qualitative reasoning, especially in the introduction/conclusion, can be quite opinionated. Sometimes I even find it hilarious.

Personally I do think there is some place for language tools in writing, but I don't think it's AI. I don't want an AI that helps me with signposts and jargon; that's going to end terribly. Grammar and spelling are perhaps more useful.. but those tools already exist, and we still have papers full of mistakes (although not every mistake makes them worthless). So if anything, I agree it's not going to change for the better with AI.. but I don't think it will end in disaster either.
 
The following users thanked this post: thm_w, Dan123456

Offline Infraviolet

  • Super Contributor
  • ***
  • Posts: 1185
  • Country: gb
Re: Scientific publishing
« Reply #19 on: October 27, 2023, 04:07:04 pm »
So, by having an LLM "assist" in writing papers, they can now ever more closely resemble existing papers, which are already mostly so badly written as to be unintelligible (too much focus on compressing meaning into short word counts but with long sentences, never really explaining anything properly, citing loads of prior work but not really summarising it or saying what aspect of it is relevant...).

If academic writing is to be improved it needs more authors stepping away from the abysmal conventions it has gotten stuck in, not letting an AI entrench those conventions further.
 
The following users thanked this post: Siwastaja, pdenisowski, Nominal Animal, Dan123456

Offline pdenisowski

  • Frequent Contributor
  • **
  • Posts: 930
  • Country: us
  • Product Management Engineer, Rohde & Schwarz
    • Test and Measurement Fundamentals Playlist on the R&S YouTube channel
Re: Scientific publishing
« Reply #20 on: October 27, 2023, 06:19:11 pm »
So, by having an LLM "assist" in writing papers, they can now ever more closely resemble existing papers, which are already mostly so badly written as to be unintelligible

 :-DD

I did two Master's degrees in very different fields (Germanic Languages and Electrical Engineering) and in both fields a certain "academic language style" is expected in order for a paper (or presentation / lecture) to be taken seriously. 

One has to wonder how long it will be until someone trains an LLM in the "academic" style for a given language.  It would be interesting to perform a sort of Turing test in which an LLM generates an "academic" paper that cannot be distinguished from one natively written by an academic.

« Last Edit: October 27, 2023, 06:23:46 pm by pdenisowski »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 7198
  • Country: fi
    • My home page and email address
Re: Scientific publishing
« Reply #21 on: October 27, 2023, 06:48:17 pm »
One has to wonder how long it will be until someone trains an LLM in the "academic" style for a given language.
Don't give them ideas!

Joking aside, I bet there is more than one aspiring company doing just that, with the intent of renting the use of the resulting LLM to academics.

It would be interesting to perform a sort of Turing test in which an LLM generates an "academic" paper that cannot be distinguished from one natively written by an academic.
Many "troll" papers written with the express idea of containing nothing meaningful but having all the necessary buzzwords have gotten published..

Let's be honest: what such a paper would really test is the time and effort a reviewer puts into verifying the references, claims, and results in an article.  My sympathy goes to the reviewers who get burdened with such tests.
 

Online SiliconWizardTopic starter

  • Super Contributor
  • ***
  • Posts: 15800
  • Country: fr
Re: Scientific publishing
« Reply #22 on: October 27, 2023, 07:19:09 pm »
One has to wonder how long it will be until someone trains an LLM in the "academic" style for a given language.
Don't give them ideas!

Joking aside, I bet there is more than one aspiring company doing just that, with the intent of renting the use of the resulting LLM to academics.

Of course. This is currently a gigantic pool of opportunities. Everyone and their sister is going to try and tap into the goldmine, in all imaginable fields. Yes, probably even with toilets.
 

Offline thm_w

  • Super Contributor
  • ***
  • Posts: 7527
  • Country: ca
  • Non-expert
Re: Scientific publishing
« Reply #23 on: October 27, 2023, 10:47:20 pm »
Personally I do think there is some place for language tools in writing, but I don't think it's AI. I don't want an AI that helps me with signposts and jargon; that's going to end terribly. Grammar and spelling are perhaps more useful.. but those tools already exist, and we still have papers full of mistakes (although not every mistake makes them worthless). So if anything, I agree it's not going to change for the better with AI.. but I don't think it will end in disaster either.

The fact that papers with spelling mistakes exist does not really add anything for or against the argument, IMO. It's like saying code analysis tools exist but people still write code with known bugs. More people should be using the tools, no?
 

Offline mendip_discovery

  • Super Contributor
  • ***
  • Posts: 1024
  • Country: gb
Re: Scientific publishing
« Reply #24 on: October 28, 2023, 08:19:30 am »
I am someone who dislikes writing, and I find it hard to write about subjects for several thousand words. I can talk about a subject for ages; I'm just not that good at getting stuff down on paper.

As part of my university course I had to write a dissertation, and it was something I really struggled with. If there had been an LLM I could have used, I think I might have been tempted to use it to help me write coherent sections, as I have a habit of rambling and repeating myself: ideas come in faster than I can write them, and by the time I am halfway through a paragraph I have lost what I was thinking of writing.

I don't think I would have an issue with people using it to make sure a sentence makes sense, or to clean up a document to make it clearer and more concise. But as with any tool, there will be some who abuse it, and that is what will ruin it for everyone.

I did wonder whether documents that have made use of an LLM should have to state that they used one, and which one, as the LLM is partly the writer of the document.
 

