| UK officials use AI to decide on important issues |
| coppice:
--- Quote from: themadhippy on October 24, 2023, 12:48:39 pm ---not sure which is worse, the old system using muppets or the new using robots. --- End quote ---

A lot of the things listed in that article are areas where I wouldn't put much faith in the average civil servant to make a well-informed decision. The important questions about using AI instead of a human are: what is its error rate, and how severe are the mistakes? My guess with current technology is fewer mistakes, but quite a few of them being seriously egregious ones. When a pattern-matching system gets a match wrong, its errors can be hilarious; when that error affects your life, it's not so funny. |
| ebastler:
--- Quote from: Neutrion on October 24, 2023, 02:24:55 pm ---Are you talking about real AI, or some very basic algorithm? In the case of the second one, yes, you are all right, and it is easy to double-check things. But if it is about AI, I think it is going to be very confusing what kind of pattern recognition there is, and on what level humans are intervening, if they are double-checking things at all, which defeats the system's purpose. REAL AI is complex and not just sorting out very easy stuff. And finding out how reliable a self-modifying algorithm is, and doing that constantly, as it is changing itself, will be almost impossible. --- End quote ---

Not sure how familiar you are with neural networks used for pattern recognition? They would certainly be considered "real AI" in my book, since neural networks underlie the vast majority of today's AI solutions. And no, using them does not automatically imply that the algorithm keeps changing. It is very common to train a network, then freeze that state, test thoroughly, and deploy.

Look, guys -- I am not by any means an expert on neural networks, but I have worked with a team that developed them for various targeted applications. I have not read a single post here which indicates familiarity with the technology; just gut responses and knee-jerk reactions. I don't think there is value to this discussion, so I'm out. |
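The "train, then freeze, test thoroughly, and deploy" workflow described above can be sketched in a few lines. This is a minimal illustrative sketch, not any government system's actual code: a toy perceptron stands in for a real network, and all function names here (`train`, `freeze`, `predict`) are invented for the example. The point is simply that once the weights are snapshotted, inference never modifies them, so the behaviour validated during testing is the behaviour in production.

```python
def train(samples, epochs=20, lr=0.1):
    """Classic perceptron training: returns a learned weight vector."""
    w = [0.0, 0.0, 0.0]  # two input weights + bias
    for _ in range(epochs):
        for x1, x2, label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

def freeze(w):
    """'Freezing' here is just taking an immutable snapshot of the weights."""
    return tuple(w)

def predict(frozen_w, x1, x2):
    """Inference reads the frozen weights; it never updates them."""
    return 1 if frozen_w[0] * x1 + frozen_w[1] * x2 + frozen_w[2] > 0 else 0

# Train on the AND function, then freeze the result for deployment.
data = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
frozen = freeze(train(data))

# Deployed queries cannot change the model, however many are made.
assert all(predict(frozen, a, b) == y for a, b, y in data)
```

Real deployments do the same thing at scale: the training phase is separate, the shipped artifact is a fixed set of weights, and any retraining produces a new artifact that goes through testing again before release.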
| TimFox:
The old data-processing term "garbage-in, garbage-out" still applies to pattern recognition on an ill-chosen but huge database. I would like to see a detailed analysis of what went wrong in the well-publicized "hallucination" of non-existent and false legal cases cited (in proper format) in the infamous court case (May, 2023), which were somehow generated by ChatGPT but not verified by the naïve lawyers who used it as a search engine. |
| coppice:
--- Quote from: TimFox on October 24, 2023, 05:33:52 pm ---The old data-processing term "garbage-in, garbage-out" still applies to pattern recognition on an ill-chosen but huge database. I would like to see a detailed analysis of what went wrong in the well-publicized "hallucination" of non-existent and false legal cases cited (in proper format) in the infamous court case (May, 2023), which were somehow generated by ChatGPT but not verified by the naïve lawyers who used it as a search engine. --- End quote --- I don't think ChatGPT is like the AIs currently being used by governments. Government interest today is mostly in sophisticated pattern matching, which is what the things in the Guardian article need. ChatGPT starts to get into trouble when it acts in a generative manner. Generative AI tries to take baby steps beyond pattern matching, and towards something we would consider intelligent behaviour. Like any infant it stumbles a lot. Current pure pattern matching is much more effective. |
| TimFox:
I can easily see turning a competent AI system loose on a corporation's full data archive to generate a good annual report. Similarly, turning it loose on the complete published works of English authors between Marlowe and Donne (including Shakespeare and the King James translation) could reveal useful results. Recently, there was a semi-comic suggestion on this site that we should all post claptrap here in order to trick the omnivorous data-skimming AI systems into producing nonsense. (As can be seen in other threads, some posters apparently think this is a good idea.) |