And lost ALL its content upon the unfortunate press of a key.
How long does Python take to do an average comparison between files?
On 200 MB of files, we are talking about 12 minutes vs 40-50 minutes.
I always have a new empty text document in Pluma (the text editor I use for plain text, including short programming snippets) within a single keystroke (and a single-character alias, g, in my shell).
I use Pluma for light scripting also. I've got a dark theme, and it highlights code and shows line numbers. It's the one that annoys me the least. It doesn't seem to like -really- big log files, though.
Edit: If I'm in the sad position of having to use a Linux system, especially Ubuntu (which is Zulu for "I can't install Devuan") the first things that happen are the complete eradication of nano and vim, after which nvi and ed are installed.
Up until the early 90's, ED was the only editor installed and allowed to be used on the systems I worked on. Once we moved to screen editors, I preferred Vi over EMACS simply because VI was always available and EMACS mostly was not. These days my editor of choice is VIM. I'm curious as to what advantage you see NVI has over VI? NVI was an implementation of VI.
Quote: And lost ALL its content upon the unfortunate press of a key.
If your browser is a Firefox clone (maybe Chrome also, but I don't use it, so I don't know), there is an add-on for this:
https://addons.mozilla.org/en-GB/firefox/addon/textarea-cache/
Sits on your toolbar or wherever and just caches textarea doobreys in the background. If you have an upset you pop it open and paste the stuff into the now blank box. Simples.
I only want a "vi", I do not want something that has closet dreams of becoming Emacs. Therefore "vim" goes to the bit bucket.
Performs poorly here... because his code is written to be slow.
It is obviously code that was ported from C without fully understanding the features of the language.
He doesn't even use list comprehensions.
I had a direct demonstration of this just yesterday.
A friend/colleague of mine was struggling with some data analysis task in Python - he's a learner in both.
The code he wrote was taking hours to complete, and he asked me for some guidance.
I'm not an expert in either subject matter, but as soon as I saw his code I told him: "You have a number of explicit nested for loops. Turn them into list comprehensions and slicing, and check the library (pandas) for some smart indexing."
Twelve hours to a couple of minutes.
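To illustrate the kind of rewrite involved (the task and data below are made up, not my friend's actual code): replacing an explicit Python loop with a single vectorized NumPy call moves the per-element work from interpreted bytecode into compiled C.

```python
import numpy as np

# Toy stand-in for a nested-loop computation: sum of elementwise products.
a = list(range(1000))
b = list(range(1000))

# Explicit Python loop: the interpreter does per-element work.
total_loop = 0
for x, y in zip(a, b):
    total_loop += x * y

# Vectorized equivalent: one call into NumPy's compiled code.
total_vec = int(np.dot(np.array(a), np.array(b)))

assert total_loop == total_vec  # same result, very different speed at scale
```

With real data-analysis workloads the gap widens further, because pandas/NumPy also avoid boxing every value into a Python object.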
The thing is, in a "real" programming language it doesn't matter whether you use a built-in construct such as foreach over a container or write the loop out explicitly yourself -- the speed will be the same either way (assuming the same algorithm).
It's still good to have things such as foreach and iterators and list comprehensions and array slicing in a real programming language, but it's a notational convenience only -- less text to write, less to read. In Python / Ruby / Perl it's critical to use that built-in high-level stuff as much as possible.
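A concrete instance of that last point in Python: both forms below compute the same list, but the comprehension runs the loop inside the interpreter's specialized C-level machinery rather than executing Python bytecode (and a method call) for each append.

```python
nums = range(10_000)

# Explicit loop: one bytecode round-trip and an append call per element.
squares_loop = []
for n in nums:
    squares_loop.append(n * n)

# List comprehension: same algorithm, handled by dedicated interpreter code.
squares_comp = [n * n for n in nums]

assert squares_loop == squares_comp
```

In a compiled language both spellings would optimize to the same machine code; in CPython the difference is routinely measurable with `timeit`.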
Quote: Saying that "No matter what sort of data structure I choose, the program will be fast"
Many times the problem is not speed but the management of the other 80% of the code, which is extensive and prone to maintenance difficulties. That's where Python stands out and is better than C.
According to Pareto's law, 80% of the code only takes 20% of the execution time, while 20% of the code takes 80% of the execution time.
Normally you should only optimize 20% of the code to gain a lot of speed.
In Python this 20% of the code can already be optimized with external libraries, sometimes written in C (like numpy).
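As a sketch of that division of labour (sizes here are illustrative): the reduction below gives the same answer either way, but the NumPy version is a single call into compiled C, which is exactly how the hot 20% escapes pure Python.

```python
import numpy as np

data = np.arange(100_000, dtype=np.float64)

# Pure-Python reduction: the interpreter touches every element.
py_sum = 0.0
for x in data:
    py_sum += x

# NumPy reduction: one call into compiled C code.
np_sum = float(data.sum())

assert py_sum == np_sum  # identical result; the fast path is the C one
```

(The values are integers well below 2^53, so both sums are exact and compare equal.)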
I'm sorry, but what you are saying is bullshit, and it goes against basic principles of computer science.
Saying that "No matter what sort of data structure I choose, the program will be fast" is fundamentally wrong; just try finding some data in a binary tree vs an array.
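To put numbers on that: the binary-tree-vs-array point is about data-structure choice changing complexity. The same O(n) vs O(log n) contrast shows up with a plain sorted list, comparing a linear scan against the standard-library `bisect` module:

```python
from bisect import bisect_left

data = list(range(0, 1_000_000, 2))   # sorted even numbers

def contains_linear(xs, target):
    """O(n): may touch every element before answering."""
    for x in xs:
        if x == target:
            return True
    return False

def contains_binary(xs, target):
    """O(log n): binary search on a sorted sequence."""
    i = bisect_left(xs, target)
    return i < len(xs) and xs[i] == target

# Same answers, wildly different cost on large inputs.
assert contains_linear(data, 999_998) and contains_binary(data, 999_998)
assert not contains_binary(data, 12_345)   # odd number, not present
```

No language choice rescues the linear version: the algorithmic gap dominates once the data is big enough.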
Yes Bruce, but it depends on the deployment. If it finds its way into the hands of normies like me then, while C is preferable, a Python script with low bandwidth in the grand scheme of things can be adjusted easily, without having to figure out what the dev was smoking with the whole C toolchain.
If you find C dangerous, Java and C# are just as n00b safe as Python, but run within a factor of 2 of C. Probably within a factor of 1.2 much of the time.
Quote: Why is it that, on this site, there are discussions that devolve into identity politics, and there's always an individual, or two, whose posts consist of spastic thoughts just vomited out.
Have you ever had a discussion in identity politics, or in plain politics, that didn't eventually devolve into spastic thoughts just vomited out?
I've found that applies to just about everything concerning opinions in general.
The opinions themselves, in my opinion (HA!), are basically worthless. What is interesting and important is the reasons and experiences behind those opinions, because only by analyzing and comparing those can one constructively build and rationally/logically examine one's own opinions. I do that all the time, and I've found it extremely useful and helpful in various aspects of my own life.
That's why I asked why anyone should concern themselves with programming language popularity in any way, except possibly when learning one's first programming language, or when desperately seeking employment as a programmer. I don't, but I know almost nothing about anything anyway, so I'm interested if anyone has some reasons I don't know about.
When the discussion devolves into combating opinions, I only participate when I believe the opinions are based on incorrect or non-factual or incomplete reasoning, and try to explain the issue, and ask how people think that affects their opinion. (However, I only do "technical" English, and have basically zero skill in such social subtext and niceties, so I fail English often here.) Especially opinions that differ from mine interest me, because their basis could be something I'm not aware of.
But when the opinions devolve (like in a recent Devuan thread) into "I'm a master in this, and I don't see the problems, so you must be wrong" without even checking the facts, I too get so irate I start spewing poor spastic counter-opinions. Sorry about that, but we're all only human. Besides, online the bandwidth is too small to properly express the emotional content and context that would defuse/inhibit such emotive reactions and spastic outbursts. Even sarcasm and jokes are easily misunderstood.
Quote: According to Pareto's law, 80% of the code only takes 20% of the execution time, while 20% of the code takes 80% of the execution time. Normally you should only optimize 20% of the code to gain a lot of speed.
That's true if you write all the code in the same language (or at least similarly efficient languages).
Quote: In Python this 20% of the code can already be optimized with external libraries, sometimes written in C (like numpy).
Let's try a thought experiment.
We have some task written in C (or Java or C#) that takes 10 seconds to run:
8 seconds: the important 20% of the code
2 seconds: the "unimportant" 80% of the code that you want to be easy to maintain
We rewrite 80% of the code in Python. Python runs 50 times slower than C. Now we have:
8 seconds: the important 20% of the code (still in C)
100 seconds: the Python part
Total runtime just went from 10 seconds to 108 seconds. The Python code is taking 92.6% of the time.
I sure hope it's a LOT easier to maintain to make it worth it.
Or, it had better be more like 99%/1% in the C version, not 80%/20%.
(Of course 108 seconds is better than the 500 seconds it would take if you wrote it all in Python ... but 10 seconds is even better.)
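The arithmetic of the thought experiment, written out (the 50x slowdown and the 80/20 split are the assumptions stated above, not measurements):

```python
C_TOTAL = 10.0       # all-C runtime, seconds
HOT_FRACTION = 0.8   # share of runtime spent in the "important 20%" of code
SLOWDOWN = 50        # assumed Python-vs-C slowdown factor

hot = C_TOTAL * HOT_FRACTION   # 8 s, kept in C
cold = C_TOTAL - hot           # 2 s in C ...
mixed = hot + cold * SLOWDOWN  # ... becomes 100 s once rewritten in Python

assert mixed == 108.0                               # the 108 seconds above
assert C_TOTAL * SLOWDOWN == 500.0                  # all-Python version
assert round(cold * SLOWDOWN / mixed, 3) == 0.926   # Python share of runtime
```

Note the rewritten "unimportant" part now dominates: to keep the mixed version near the all-C runtime, the hot fraction would have to be more like 99% than 80%.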
- far more suited to long-term results instead of newbie appeal