| General > General Technical Chat |
| Why does OpenAI ChatGPT, Possibly Want to disagree/annoy and change my eating... |
| MK14:
--- Quote from: bigfoot22 on January 04, 2023, 02:52:12 pm ---It's so incredibly dangerous that it blows my mind that corporations are researching it in the first place. --- End quote --- I suppose an analogy would be someone inventing a new nuclear-bomb technology, one using fusion as well as fission, such that a normal person with access to a computer, a 3D printer, and some common, readily available materials such as chemicals could construct a device a thousand times as powerful as the biggest hydrogen bombs we have today, using plain water as the fuel (via some hypothetical fusion-initiating technology). The inventor then publishes the full working plans on the open web as a large PDF, such that just about any competent university graduate could construct such a device for $3,000 in about two weeks, and do it in secret (i.e. it emits no radiation or chemical traces, so the authorities can't readily detect who is trying to make one). Then sooner or later, some mad/bad/crazy person or group would decide to build such horrible weapons and launch them at some unsuspecting (or suspecting) nation or nations. In other words, I think the genie is out of the bottle, and it is probably way, way too late to stop this AI thing (whatever you want to call this technology), because some crazy or irresponsible individual, organisation, or country may go too far with AI, if that ever becomes technically possible. On the bright side, I don't think it is possible or being done at the moment (the future is a different matter), as we are not yet far enough advanced to make genuinely, generally intelligent (AI) machines/entities. tl;dr Learn to bow down and say 'SIR!' whenever you walk past or near any robot, supercomputer, or suchlike. :-DD |
| tom66:
We do urgently need regulation of this kind of technology. That said, what GPT is, essentially, is a really good language model. Language is a small part of what makes us human; it's probably the most defining aspect that separates us from the great apes, but it's not intelligence in its own right. GPT is very good at taking what others have derived, processing it in some way, and creating something based on that data. What it cannot do is create new knowledge or information. I do not know whether we will reach AGI (artificial general intelligence) within our lifetimes, but even if we did, running it at a scale similar to what has been proposed for GPT-4 would require an entire Facebook- or Google-type datacenter to operate similarly to a human brain, unless massive improvements are made to existing learning and neural-network models. No computer processor comes even slightly close to the density and interconnectivity of the human brain, and we are rapidly approaching the limit of Moore's Law, though it does always seem to be a bit further away. To get to an intelligence far superior to a human brain, who knows how many resources would be required? We don't know whether it will scale linearly or exponentially. What would such a system look like? Could it collectively beat 100 humans working together to stop it? Could it even be made moral? If it requires something like a nuclear power plant to keep running, it wouldn't exactly be difficult to build in literal failsafes, in the form of large off-network circuit breakers. AI is more of a danger in other areas, in that it creates problems for the concept of a capitalist model of nearly full employment. If you have self-driving cars and trucks, you have already eliminated some 15% of ALL jobs. If you add call centers, paralegals, receptionists, and data entry to that, you have probably eliminated another 5-10% of all jobs. 
It's possible such tools could even eliminate some teaching and lecturing roles, given how well they can process existing knowledge (though further work is required on their 'bullshitting' behaviour). Very generally, these jobs tend to be at the lower end of the income spectrum, where the cost of further education is already climbing out of reach too quickly. So you could create an entire underclass of people who have no marketable skills and for whom a job just doesn't exist any more. I liken it to how coachmen and farriers were obsoleted by the car, except that they had decades to adapt; we could be looking at a transition far shorter than a decade. I know someone who just started a degree in creative writing - GPT can already write convincing short stories, and it won't be long before it can replace the lower end of non-fiction writers. A biographer might be able to distill facts learned from an interview, combined with data from the internet, into a full book on someone in a matter of days, instead of the months to years it currently takes to compose such material. Technical writing on well-known subjects could also become far simpler, with the human involved only in editing and review. For programming, and software development in general, there is a small risk, but writing software is more than language. I think LLMs will act as a productivity boost well before they replace the bulk of programming; you would need an AGI to put actual programmers out of work. So I am not that worried just yet. I do not think we are in grave danger from large language models, but I would say that the political class has not prepared in any way for what AI means for the future, and that's kind of scary. It's the same kind of blind hope that everything will work out that most politicians have brought to climate change. |
| MK14:
--- Quote from: tom66 on January 04, 2023, 05:00:20 pm ---If you have self-driving cars and trucks, you have already eliminated some 15% of ALL jobs. If you add call centers, paralegals, receptionists, data entry to that then you have probably eliminated another 5-10% of all jobs. --- End quote --- In practice, I don't think it would pan out like that. E.g. in many supermarkets these days (in some countries) there are what amount to robotic shopkeepers, usually alongside real till staff. But not everyone likes to use those automatic tills (whatever you want to call them). In some cases they can get very annoying, all too frequently beeping and refusing to accept one or more items that you are trying to buy. Also, the human element of being able to talk to the till person or the delivery driver, and thank them, is lost, which some people DON'T like. So although automating deliveries, receptionists, etc. may well save staffing costs in the future, it could be unpopular with customers, and businesses don't want to upset customers in that way, otherwise it can lose them sales/profit. As a rule of thumb, if future developments in computer/machine/robot technologies replace certain job types, society can expand its desires and the range/quantity of products and services it wants to buy, so that nearly 100% of people who are fit, ready, and willing can still get a job, even if it is not in those particular roles. E.g. a window-cleaning business gets replaced by affordable window-cleaning robots, which can automatically go round a building and clean all its windows. But that can create new jobs for people to sell, repair, maintain, and design those robot units, and to show new customers how to use them. E.g. the person who used to run their own window-cleaning round and make a living from it 
can retrain as someone who goes to people's homes and trains them to use these new window-cleaning robots: how to unpack them from the box, how and where to store them, how to fix common ailments, and the other things they need to know. Alternatively, they may buy such a robot themselves and take it to the homes of people who can't afford one (or don't want one, for various reasons) and let it clean their windows for a suitable fee. That might even suit people who couldn't actually be a window cleaner themselves (e.g. they can't stand the heights of climbing a ladder), but can happily take that window-cleaning robot round to people's homes. |
| Zero999:
Going by the rate of progress, I can't see AGI being developed any time soon. Computers are already powerful enough to achieve it, but whether it's possible for them to think in that manner is another thing. |
| tom66:
--- Quote from: Zero999 on January 04, 2023, 06:21:32 pm ---Going by the rate of progress, I can't see AGI being developed any time soon. Computers are already powerful enough to achieve it, but whether it's possible for them to think in that manner is another thing. --- End quote --- What makes you believe computers are already able to achieve AGI? The human brain is 'estimated' to have somewhere around the equivalent of 10^15 MIPS of capability -- the difference being that the brain is really an interconnected analog computer rather than a digital logic circuit. The fastest processors are still around 5-6 orders of magnitude away if we look at the capability of typical neurons in the human brain, how they are connected, and how they can change their connections. Even specialised processors, e.g. vision neural-net processors, are around 3-4 orders of magnitude away. And don't forget that the human brain does all that on about 20 watts, yet the Tesla Autopilot computer can maybe pilot a vehicle autonomously on close to 10x that. We still have a long way to go before we're at the same level, and we'll probably need a semiconductor die about 30 cm in diameter with far more than one layer. |
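The orders-of-magnitude comparison above can be sketched as back-of-envelope arithmetic. A minimal sketch, assuming the post's rough figures (10^15 MIPS for the brain, ~10^9 MIPS for a fast general-purpose CPU, ~10^11 for a specialised neural-net accelerator, 20 W for the brain); these are illustrative estimates, not measured values:

```python
import math

# Rough figures quoted in the post above (estimates, not measurements)
brain_mips = 1e15   # estimated human-brain equivalent throughput, in MIPS
cpu_mips = 1e9      # fast general-purpose processor, order of magnitude
npu_mips = 1e11     # specialised vision/neural-net processor, order of magnitude

def gap_in_orders(target, current):
    """Orders of magnitude separating current capability from the target."""
    return math.log10(target / current)

print(gap_in_orders(brain_mips, cpu_mips))   # ~6 orders of magnitude
print(gap_in_orders(brain_mips, npu_mips))   # ~4 orders of magnitude

# Energy comparison: brain at ~20 W vs an autopilot computer at ~10x that
brain_watts = 20
autopilot_watts = 10 * brain_watts
print(autopilot_watts)                       # ~200 W for one driving task
```

The point the arithmetic makes is that even the most optimistic hardware is still thousands to millions of times short on raw throughput, and roughly ten times worse on energy, under these assumptions.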