I won't bother with a boatload of quotes...
@Westfw.
First, you don't need to do those sorts of computations on 1080p video. You get a stream of data from a Blu-ray, send it unchanged to a GPU that has HD video decoding in hardware, and it sends the stream to the display after decompressing it. And the decompression isn't rudimentary or fixed, either; there's actually fuzzy logic in the process, so one decoder will generate a slightly different image than another.
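To make that pipeline concrete, here's a minimal sketch (my own illustration, nothing from the posts above) of asking the GPU what it decodes in hardware on Linux via VA-API. It assumes libva is installed and the render node sits at /dev/dri/renderD128:

```c
/* Query the GPU's hardware decode profiles through VA-API.
   Build: gcc vaquery.c -lva -lva-drm */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <va/va.h>
#include <va/va_drm.h>

int main(void)
{
    int fd = open("/dev/dri/renderD128", O_RDWR);  /* assumed device path */
    if (fd < 0) { perror("open render node"); return 1; }

    VADisplay dpy = vaGetDisplayDRM(fd);
    int major, minor;
    if (vaInitialize(dpy, &major, &minor) != VA_STATUS_SUCCESS) {
        fprintf(stderr, "no VA-API driver\n");
        return 1;
    }

    int num = vaMaxNumProfiles(dpy);
    VAProfile profiles[num];
    vaQueryConfigProfiles(dpy, profiles, &num);

    for (int i = 0; i < num; i++)
        if (profiles[i] == VAProfileH264High)
            printf("H.264 High profile decodes in hardware; "
                   "the CPU never touches the pixels.\n");

    vaTerminate(dpy);
    close(fd);
    return 0;
}
```

A player that uses this path hands the compressed stream straight to the GPU, which is exactly why 1080 playback doesn't need a monster CPU.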
The rest of your post tells me you've given in, like the others, to thinking that there is some tradeoff that has to be made, some "bargain with the devil" if you will, to get the system to go faster. There isn't. Your last statement is exactly the mindset of the programmers doing both lazy PC OS work and wasteful microcontroller work: "Well, here's all this power, let's waste it... what else are we gonna do with it?" There's no reason you can't have your cake and eat it too. The desktop can be just as pretty, or very nearly so, and still go fast.
I could apply the same reasoning you guys are using to developers/programmers: well, we've got all these out-of-work software developers; maybe they ought to get off their duffs and write some decent code, and then they wouldn't be out of work. Here they sit unemployed, so let's just waste all this programming talent, because we've got lots of it. Same thing, isn't it? Sort of ironic, I say. Here in the US there's an epidemic of unemployment among software developers and high-tech workers of all sorts, yet all of them were looking for ways to develop things so wastefully, in so little time, that they're now out of work. Hmmm... food for thought. I bet right about now there's a whole bunch of designers, of both hardware and software, who wish they had some assembly code or 8-bit MCU work to do...
@alm
Meh, it gives the GPU something to do. Until the GPGPU stuff really takes off, there's not a whole lot for the GPU to do in typical desktop applications. Yeah, it will probably increase power usage slightly (though less than drawing the non-translucent straight corners on the CPU). I'm sure you can easily turn it off if you're so inclined, but unless your app consists of drawing tons of windows on the screen, this is hardly a big deal.
Yes, you can choose the Windows Classic desktop or a Linux theme without the rounded corners and turn at least that stuff off. As to it not being a big deal: your system draws those same windows all the time, and yes, it is very much a big deal. Very noticeable, as another poster mentioned way above. There is a huge performance increase to be had just from the "Performance" tab of System Properties, by turning off many of the UI effects. I bet 80% of those aren't even noticed by any user (speaking aesthetically now). I leave a few on, but only ones that either have value or cost nothing in performance: the translucent selection rectangle, full window drag, ClearType text. There may be one or two others, but I turn off everything else. It makes a huge difference in everything you do.
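For the curious, here's a hedged little sketch of doing the same thing programmatically through the Win32 SystemParametersInfo API instead of the Performance tab (my own illustration; the two re-enabled settings are actually independent of the main effects group, shown only to mirror what I keep on):

```c
/* Turn off the "eye candy" group, keep the cheap stuff.
   Build with any Win32 toolchain, e.g. gcc uifx.c -o uifx */
#define _WIN32_WINNT 0x0600
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* SPI_SETUIEFFECTS gates the whole group: fades, menu
       animation, smooth scrolling, selection fade, etc.
       For this action the new BOOL value rides in pvParam. */
    if (!SystemParametersInfoW(SPI_SETUIEFFECTS, 0, (PVOID)FALSE,
                               SPIF_UPDATEINIFILE | SPIF_SENDCHANGE)) {
        fprintf(stderr, "SPI_SETUIEFFECTS failed: %lu\n", GetLastError());
        return 1;
    }

    /* The two I keep; for these actions the BOOL rides in uiParam. */
    SystemParametersInfoW(SPI_SETDRAGFULLWINDOWS, TRUE, NULL,
                          SPIF_UPDATEINIFILE | SPIF_SENDCHANGE);
    SystemParametersInfoW(SPI_SETFONTSMOOTHING, TRUE, NULL,
                          SPIF_UPDATEINIFILE | SPIF_SENDCHANGE);

    puts("Effects group off; full window drag and ClearType on.");
    return 0;
}
```

Pass (PVOID)TRUE to the first call to put it all back.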
Next paragraph.
Again, I disagree. I would say that for large workloads, depending on the data handling, there is less steady-state performance to be gained than there is miscellaneous gain from all these wasteful little UI processes going on. Unfortunately, those are what's strangling the large workload and keeping it from doing what it should (which is completing the big job). I don't think you're aware of how much this pig of a UI is crippling even the more efficient parts of the system.
Another comparison. Back in the Win98 days you had smartdrv.exe, a configurable disk cache. It could make disk transfers an order of magnitude faster. It's gone now, and there is nothing like it, even though we now have more unused RAM than we had hard disk space back then. But we can't use it for disk cache. Why?

Go copy a 1080p MKV video file from one hard disk to another. With adequate free space, any cheesy hard drive will read or write 65 MB/s, and I can transfer 130 MB/s over my home network (actually a cheat, because I can move data over the network from PC to PC faster than I can from one drive to another in the same PC). No RAID. A 30 GB file will take, eh, 4 to 7 minutes depending on what throughput you get.

But now copy one file and start a second simultaneous copy. Due to the drive access method, the speed drops as the heads alternate between jobs, and those two files take 3 to 5 hours instead of 8 to 14 minutes. Inefficient. Not hard to fix at the OS level; impossible to fix at the user level. Oh, and this happens while we have 6.5 GB of free RAM in the system sitting idle instead of being used for disk cache. And to add to that, with 6.5 GB of free RAM the OS is still hitting the swap file every second. Go ahead, explain that reasoning to me.
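You don't even need kernel changes to see the buffering point. Here's a crude user-level sketch (my illustration only; the 64 MB chunk size is an arbitrary assumption, and the real fix belongs in the OS cache): copying in huge chunks means even two concurrent copies hit each drive in long sequential runs instead of head-thrashing little ones.

```c
/* Copy a file through one big user-space buffer so that
   concurrent copies alternate in long sequential runs.
   Build: gcc bigcopy.c -o bigcopy; run: ./bigcopy src dst */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK (64UL * 1024 * 1024)   /* 64 MB, assumed; tune to taste */

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s src dst\n", argv[0]);
        return 1;
    }
    FILE *in  = fopen(argv[1], "rb");
    FILE *out = fopen(argv[2], "wb");
    if (!in || !out) { perror("fopen"); return 1; }

    char *buf = malloc(CHUNK);
    if (!buf) { perror("malloc"); return 1; }

    size_t n;
    while ((n = fread(buf, 1, CHUNK, in)) > 0) {
        /* One long read, then one long write: the drive heads
           move once per 64 MB instead of once per 64 KB. */
        if (fwrite(buf, 1, n, out) != n) { perror("fwrite"); return 1; }
    }

    free(buf);
    fclose(in);
    fclose(out);
    return 0;
}
```

Run two of those side by side and compare against two Explorer copies; the difference is the interleaving granularity, nothing more exotic.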
How responsive were photo editing applications on that mainframe? How many HD video streams could they play at the same time?
Unbelievably responsive. Yes, they did image work; in fact, if you look at your current copy of Windoze, it'll still contain references to Wang Labs' rights to imaging algorithms, very much in use then and now, on both mainframes and PCs. I actually worked on a System/370 at a local community college that ran a CAD system on 32 dumb terminals via SNA, with beautiful 24" color monitors. Holy expensive... What took a PC running AutoCAD two minutes to draw, it would put on the screen of any of those terminals as fast as the refresh rate. Pow! Mind-blowing speed. Yes, they could have run all the video they wanted.
I was also a dealer for Autodesk then, and there was a company making a video card that came with a dongle and a "driver" "optimized" for AutoCAD. It was all BS. You could put that dongle on any PC and get the full 40x performance increase in regens and re-renders; just load the "drivers" that fixed the shit code from Autodesk. You didn't need the crappy Cirrus Logic video card they sold for obscene money. This was an entire industry of selling snake oil, much like now. They want the stuff to run slow. It's more profitable.
How do translucent windows and thumbnail images in folder views slow down the FFT?
Translucent windows and thumbnails (their implementation and overhead) slow your system to a crawl, all the time. Mostly it's the generic, do-everything-every-time mentality of disk access that kills PC OS performance, Windows and Linux alike.
But a better question might be: wth does the FFT have to do with the performance of either software or hardware, at the PC OS level or at the uC level?
Facts? Where?
Facts about software performance. Facts about how filesystem implementations have far-reaching effects on total system performance. About how programming efficiency, even on a uC, has benefits beyond the shortsighted aims of many designers. And while it may or may not be a fact, it is my opinion that the argument that people's time isn't worth the performance gains to be had is patently false. OK, I'll agree there is a balance somewhere between man-hours of development time and system performance / user experience. But we're at opposite ends of the world on this: you seem to think the current lack of performance in, say, our OSes is either my imagination or justified by today's hardware, while I think that with minimal effort, huge gains are there to be had, if anyone cared.
To put it better: the tools we have to work with are built on such bloated libraries (which in many cases we are forced to use) that no matter how we try to make our own stuff efficient, we're restricted in what we can achieve. You appear to think that justifies the lack of effort; I do not.
The Google example fits. Data is data. It doesn't become something other than 1s and 0s just because it's a Blu-ray movie.
They run the same OS that takes half a minute to display the contents of /usr/bin.
Couldn't have said it better myself; I think you just made my point. They took the same bloated OS that won't list a hard disk directory in 5 seconds, reworked the thing from stem to stern, and made it search all the data on Earth in 0.01 seconds.
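And that half a minute has a perfectly measurable anatomy. Here's a hedged POSIX sketch (my own illustration): getting the names out of the directory is one cheap batch of reads; it's the per-entry stat() that file managers do for sizes, icons, and thumbnails, roughly one cold-cache seek per inode, that eats the time.

```c
/* Where directory listings go to die: names are cheap,
   per-entry stat() is not. Build: gcc lsdemo.c -o lsdemo */
#include <stdio.h>
#include <dirent.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/usr/bin";
    DIR *d = opendir(path);
    if (!d) { perror("opendir"); return 1; }

    struct dirent *e;
    long names = 0, statted = 0;
    while ((e = readdir(d)) != NULL) {
        names++;                        /* nearly free: batched reads */

        char full[4096];
        snprintf(full, sizeof full, "%s/%s", path, e->d_name);
        struct stat st;
        if (stat(full, &st) == 0)       /* the expensive part: one    */
            statted++;                  /* inode lookup per entry     */
    }
    closedir(d);

    printf("%ld names, %ld stat() calls\n", names, statted);
    return 0;
}
```

Time it with a cold cache, then run it again warm; the gap between the two runs is exactly the cache the OS should have been spending that idle RAM on all along.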