Code bloat, or the equivalent data puke, means we have far too many computer resources to know what to do with.
Windows Defender probably has its own ozone hole at the moment :-DD
One thing to consider is whether the alleged additional energy required for developers to do a leaner job (and as bd139 noted, even that is questionable) would outweigh, or at least be on par with, the cost of software bloat in terms of energy?
It takes time, energy and thought to reduce code size. All of these are of limited availability in development.
In ending “the bloat”, software devs are happy to follow the EEs: just lead the way! Stop using readily available and tested discrete components, which waste board space and leave characteristics underutilized. Instead, roll out a new custom chip for each problem. Of course optimized so that no watt or square millimeter is wasted.
The alternative is to understand how things actually work. Then you will grasp why, even with the most perfect approach and hypothetical hyper-optimised software, for nearly all software products you will not reach an improvement of even an order of magnitude. Or you may remain in the comfort zone of throwing shit, turning a blind eye towards your own domain and imagining a world that never existed.
One thing to consider is whether the alleged additional energy required for developers to do a leaner job (and as bd139 noted, even that is questionable) would outweigh, or at least be on par with, the cost of software bloat in terms of energy?
There's a big difference between software used by 10 people and software used by a billion people.
For 10 people it's almost always going to be cheaper to buy them a faster machine. Each. It is at least actually a valid engineering and financial tradeoff.
It also depends on whether the costs of hardware to overcome software bloat are borne by the company itself (e.g. web servers) or by their customers (e.g. Microsoft Windows and Office).
Back in the day I was early on the smartphone bandwagon and got a phone running Windows Mobile (not to be confused with Windows Phone, that stuff is garbage), and it ran on a 133MHz ARM CPU with 32MB of RAM. Later on I moved to a PDA with a 500MHz ARM and 128MB of RAM.
That is very fast and a lot of RAM!
When I was working in a small company writing and selling a J2ME-to-native compiler for Verizon phones in 2006 to 2008, the typical phone had maybe a 16 MHz ARM7TDMI (running on a 16 bit bus) and 400 KB to 2 MB of free RAM. Some of the lower end phones (and we had literally 200 different models in the office for testing purposes) were I think as low as 1 MHz to extend battery life.
I think your understanding of the splash screen is probably wrong. That's mostly a marketing interstitial, which can optionally be used to hit the network to fetch the initial screen if it needs it. Most of the apps on my phone at least don't have one and are 100% instant starting. My iPad, which is basically the same thing but bigger, is the only computer I've had that can actually keep up with me, it's that fast.
Maybe it was because I was running Windows Mobile on the more powerful hardware of the time.
As for Windows Mobile, I did a lot of development on that for POS and POD systems. I think you have some faulty memories there. It was painful at best.
Code bloat or the equivalent data puke, means we have far too many computer resources to know what to do with.
Sort of. At the moment, the constraining cost is energy. That's starting to burn the end users now.
Watch the race to the bottom on energy usage. Which is why ARM is popular suddenly.
Eventually the software will be the constraining factor.
Bloat exists because it's currently cheaper to leave piles of shit everywhere than clean it up.
There are other reasons. One particularly pernicious one is where marketeers have to specify what's in product N+1. If those marketeers are inexperienced in the product domain, they will say "just like product N, because we need to continue to sell it, but add X and Y". Obviously that leads to a wart being added to the hairball.
But if you ask knowledgeable customers, they will sometimes say "don't duplicate N, do something better".
Software will push hardware harder and harder to do the "dirty work" of drawing less energy.
By the time it's really *considered* the constraining factor, software development will likely have been handed over to AI, so we'll have only machines to blame.
:-DD
Keep in mind nobody knows if there's such a thing as software "engineering".
This has been an ongoing debate for decades.
No, SiliconWizard didn't claim that. He wrote that nobody knows ... ongoing debate.
Keep in mind nobody knows if there's such a thing as software "engineering".
Ok, so software is not "engineering", but a joke.
Some of this may be generational, or following previous designs even when not appropriate.
Case in point, we had some folks visiting from a commercial company with a prototype of a radar they wanted to test alongside some of our equipment. Their RF guy was there to operate the system. He didn't have any software background, but was running code written by their software guy. My jaw dropped a couple of feet when I saw him spin up a docker instance (!!!) to acquire the data from an Ettus Research USRP SDR. Then there was another docker instance to process the data and Matlab to view it. WHY??? I could understand using Matlab for a quick prototype, but docker? Geez!
Sounds like their software dev was someone who had used docker for developing web stuff, and was asked to work on a data acq. system with the above results. They probably consider it either old fashioned or black magic to just write a program to connect to a TCP socket and store the data to disk.
Perhaps the software guy was wise enough to try his level best to ensure that the prototype software he was tasked to develop would never be shipped to end users...
Maybe so, but if you have a 'batch mode' process like this, you have to wait until your half-hour of data is recorded before you know if it works, or whether the ADC cable was disconnected (yes, this happened). It's just not fit for purpose.
They probably consider it either old fashioned or black magic to just write a program to connect to a TCP socket and store the data to disk.
If you have netcat, it'd be just one command: netcat -d hostname-or-address port > file.
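And even as a standalone program it's only a handful of lines. A minimal sketch in Node.js (hypothetical host, port and filename, not anyone's actual acquisition code):

// Connect to a TCP data source and stream every received byte straight to a file.
// '192.168.1.50', 5000 and 'capture.bin' are made-up placeholders.
const net = require('net');
const fs = require('fs');

const socket = net.connect({ host: '192.168.1.50', port: 5000 });
const out = fs.createWriteStream('capture.bin');

socket.pipe(out); // raw bytes go to disk as they arrive
socket.on('error', (err) => { console.error(err.message); process.exit(1); });
socket.on('close', () => console.log('capture finished'));

No docker, no framework; run it with node capture.js and watch the file grow.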
Frameworks are all the rage and most will be dead in 5-10 years' time.
Most current frameworks will be dead in a decade, some won't.
New frameworks and languages will arise, will make mistakes that were known and solved 10 years ago, but HR departments will insist on hiring newbies that don't know enough to ask the right questions.
Literally, my first result was 'Trainee Drainage Engineer' for 'all aspects of drain and pipe cleaning'. So, there you have it, engineers chasing turds away.
Legacy support, web frameworks and so on. Not reinventing the wheel and not wanting to remove any existing wheels.
Worst company for bloatware is Microsoft.
They build so many layers into their software that it's almost impossible to copy or replace.
Windows, .NET Framework, .NET Core, WPF, WinForms, etc. etc.
That’s all the good stuff.
If you want to poke something, start with Teams and look at all the shit they are building on top of layers of shit in TypeScript. The next major version of Office is in TypeScript and Electron.
Yep and it’s getting worse each release. Half the ancillary crap that runs behind the scenes and the remote host for remote machines eat a ton of RAM and resources. I’ve dumped it and gone back to vim recently.
:%s/foo/bar/g for that.
I personally like code editors that provide good autocomplete, and Microsoft's IntelliSense does a rather good job. These autocomplete features make me more productive by not making me remember the exact name of everything in my code, and they save time by not having me look up what a structure contains, or getting it wrong and finding the typo at compile time. This is especially true when using outside libraries that I have never used before and whose functions I don't know by heart (the whole function prototype appears right there as I type it). Being able to just hit a key and have it autocomplete also speeds up typing, reduces the chance of a typo and removes the discouragement from using longer names. At the end of the day it helps me get my coding done faster. Computers are better at remembering things than I am. If this comes at a cost of a gig or two of memory, so be it; my machine has 32GB.
Just so.
But it also needs the right language. Good luck with autocompletion with, for example, Forth or Lisp!
Autocompletion has one other virtue that newbies don't understand. Too often you see statements to the effect that "language X requires too much typing" and that "my language is better because it uses duck-typing". Autocompletion removes the "problem" of too much typing. (Personally I'd rather the compiler told me I'm wrong, rather than have a run-time error.)
Autocomplete crap freaking annoys me to no end, so that's definitely not for me. I don't need it.
Give me good syntax highlighting, and sensible auto-indentation (which VSCode doesn't have IMHO, but that will of course be highly subjective).
As to a language needing "too much typing"? As I say on a regular basis, if, when developing software, the amount of typing needed takes any significant amount of time/energy compared to the design effort, then you're either writing trivial stuff or you're doing something really wrong. Just my 2 cents. :popcorn:
Autocomplete crap freaking annoys me to no end, so that's definitely not for me. I don't need it.
I *hate* things flashing up when I'm trying to think and type. If you have to hit a key to get the suggestions then fine.
As for reducing typing, emacs' M-/ command works fine for me. It searches first backwards in the current file, then from the end of the current file, and then other open files, for a word starting with the characters to the left of the cursor, and completes it. That's all you need for not repeatedly typing long and descriptive names.
Quote: Give me good syntax highlighting, and sensible auto-indentation (which VSCode doesn't have IMHO, but that will of course be highly subjective).
I'm not sure what "good" syntax highlighting is.
String literals and comments different from other things, maybe. I don't need my freaking editor to remind me which things are keywords and type names. I'd rather have plain black text than an editor that looks like a Fisher-Price toy.
Quote: As to a language needing "too much typing"? As I say on a regular basis, if, when developing software, the amount of typing needed takes any significant amount of time/energy compared to the design effort, then you're either writing trivial stuff or you're doing something really wrong. Just my 2 cents. :popcorn:
100%.
I'm not sure what "good" syntax highlighting is. String literals and comments different from other things, maybe. I don't need my freaking editor to remind me which things are keywords and type names. I'd rather have plain black text than an editor that looks like a Fisher-Price toy.
(I prefer gray text on black for whatever reason, and as little of the editor showing as possible; only the text I'm editing.)
As to a language needing "too much typing"? As I say on a regular basis, if, when developing software, the amount of typing needed takes any significant amount of time/energy compared to the design effort, then you're either writing trivial stuff or you're doing something really wrong. Just my 2 cents. :popcorn:
I fully agree! And I often refactor/rewrite my own code.
The only occasional use I make of it is to edit markdown files. It has an extension for previewing the result, which is handy. If you can point me to a markdown viewer that is not web-based and that works well, I'll be glad to switch!
Totally irrelevant questions IMO. Real software bloat is unused bytes that can be thrown out without compromising any of the end results.
No company will ever invest millions of $ for a 20-30% code reduction; they invest in the next thing. Why? Because new things have an ROI, while code reduction and cleaning only cost money.
I don't fully agree with the last statement, that code reduction/cleaning only costs money. Of course, as a software dev I know that and you know that. The problem I have encountered over the last 22+ working years is that the management that has to decide where to spend the FTEs does not see it like that.
Most managers nowadays get rated on short-term objectives, with time periods of 1 year max. I never saw a manager who would get a bonus if their software stack were improved over 5 years' time.
And in my work experience of the last 8 years in a SAFe way of working, it was always "what can you demonstrate to us that you did in the last sprints or PI?" It is very difficult to demonstrate that you cleaned up the software bloat or architecture and then show that the product behaves exactly the same way.
They think engineers always want to make things more perfect than necessary,
Which is true. This thread is proof of that.
Moore's law says computing power increases exponentially. But without a matching increase in bloat the computer industry would soon be reduced to the same status as toilet paper. Actually worse, because with efficient software a 10 year old computer can still do everything users need it to even after many years of use, whereas toilet paper can only be used once.
Which is to gamble that the inevitable crash does not occur while you have a stake in the game, even when it means that fixing the system is out of the question and the ensuing crash will be harsher and more violent to the people who have to live through it.
People who complain about bloat wasting resources and reducing performance don't understand the goal of the system.
Our capitalist society is based on convincing others to pay you for the goods and services you produce.
Yes, tell us about the bad capitalist "system". Perhaps also mention a good communist/socialist "system"? How lovely, caring and tender it is, with no consumerism, and kind to the people and the planet.
Oooh politics. You don't look good by assuming that engineers won't spot the fallacy in
- X is bad
- therefore Y must be good
- where Y isn't not(X)
private ownership of resources and land (*including* endangered environments and habitats) is the best way to preserve them.
There is no ownership of land.
Voluntary exchange is the best way to organise an economy, and widely-distributed private ownership of resources and land (*including* endangered environments and habitats) is the best way to preserve them.
According to historical record, this is true. Add limited competition (i.e., encourage competition but discourage or ban monopolies and cartels), and you'll get even better results for the humans involved. It also does not exclude commons.
At best you are a temporary caretaker of the land.
Isn't that exactly what ownership (of land) means? It is to me.
Oooh politics. You don't look good by assuming that engineers won't spot the fallacy in
- X is bad
- therefore Y must be good
- where Y isn't not(X)
Law of Excluded Middle.
You decide for yourself what to do with your time and your property, or someone else tells you what to do with them. There are literally no other options.
The label placed on the "someone else" and their central planning is irrelevant.
Some projects are just so complex that implementing a lot of it yourself would take too much time. The developers of the library probably also put a lot of work and knowledge in that library to make it perform well. You don't want to reinvent the wheel after all.
module.exports = leftpad;
function leftpad(str, len, ch) {
  str = String(str);
  var i = -1;
  if (!ch && ch != 0) ch = ' ';
  len = len - str.length;
  while (++i < len) {
    str = ch + str;
  }
  return str
}
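For scale, this is roughly all those eleven lines are ever asked to do (illustrative calls, not taken from any real dependent package):

leftpad('7', 3, '0');  // "007"
leftpad('abc', 6);     // "   abc" (pads with spaces when no pad character is given)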
Some projects are just so complex that implementing a lot of it yourself would take too much time. The developers of the library probably also put a lot of work and knowledge in that library to make it perform well. You don't want to reinvent the wheel after all.
In that case, you should support the library, however.
Also it is hard to tell how good a library is when you come across it. You only truly see how usable or performant it is once you've tried using it.
Exactly; which is one of the reasons I myself try out all kinds of stuff as a "hobby".
Yeah, such as the left-pad library quoted above from node.js's NPM repository.
In that case, you should support the library, however.
If you work on a commercial product, it is an investment into your own business.
If you work on a noncommercial product, show your support by letting the upstream know you're happy for the work they do, and when you find a problem (or one of your downstream users reports a problem), you filter and check the bug report and pass it upstream –– or better yet, investigate it and provide suggested patches with full problem description upstream.
This is just enlightened self-interest, in my opinion.
I could see the management being furious about someone submitting a git pull request for a library fix that their employee developed during work hours that they are paying for. Reasons ranging from "Why should we be paying to fix bugs in some guy's crappy library" to "This is company property, what is this GPL you are whining about" to "We don't want the competition to use OUR fix". This is probably part of the same reason why service manuals don't have schematics anymore.
I could see the management being furious about someone submitting a git pull request for a library fix that their employee developed during work hours that they are paying for. Reasons ranging from "Why should we be paying to fix bugs in some guy's crappy library" to "This is company property, what is this GPL you are whining about" to "We don't want the competition to use OUR fix".
That's exactly where the understanding of how open source works comes in. I have pointed out, and will point out, to that management person that this means "we don't want the fixes our competition makes to the core parts that we too use in OUR products; we want ours to be crappier." If they start getting red, I'll calmly explain how the ecosystem works for a company whose life depends on profit. The key is that only an idiot whom no self-respecting shareholder would hire into management would value small short-term savings over huge long-term profits.
I am not that much about Open Source, I use Windows even. But I do provide feedback to open source projects when I have something valuable to share. If I fixed a bug or added a feature to an open source project, I share it with the developers. It doesn't cost me anything to share it and the developers are pretty much always happy to see it.
Like I said, doing that, and fostering a culture that favours developers doing that, is just plain enlightened self-interest.
A lot of people don't understand what open source is.
Nintendo wrote the code for their games, so it belongs to them. They make money from it by selling the game cartridges and the consoles to play them on. Any method used to get around that potentially loses sales for Nintendo. How is that an example of open source 'helping' them?
Lots of companies perceive open source as a threat rather than a help. A classical example of this is Nintendo. They have a huge legal department suing people left, right and center for doing pretty much anything to anything with a Nintendo badge on it. According to them you are not even allowed to share video footage of their games running on legit hardware. They try to shut down people developing emulators for their systems (but fail to do so because it is not illegal). They even threw DMCA claims at YouTube channels that show how to use emulation. There was an effort to decompile Super Mario 64 back into C code. They threatened legal action against that too, even though it contained no copyrighted data from the original ROM cartridge.
Nintendo wrote the code for their games, so it belongs to them. They make money from it by selling the game cartridges and the consoles to play them on. Any method used to get around that potentially loses sales for Nintendo. How is that an example of open source 'helping' them?
The same way availability of books free of charge increases the sale of books by the same author, as discovered by Baen Books' Baen Free Library (https://en.wikipedia.org/wiki/Baen_Free_Library). It was founded in 1999 by Eric Flint, an author himself, and publisher Jim Baen.
The reason Nintendo try to stop people making emulators is obvious - because they know people will use them to play pirated games.
Yet, there is no actual data backing that "knowing". In fact, actual experiments and statistics point to the opposite.
Actually, less e-junk and wasted resources would be much better for us to survive in the long term. Based on your reasoning we should limit the lifetime of cars to 5 years, instead of using them for 15 to 20 years or even longer. >:D
In Japan they effectively do limit the lifetime of cars to 5 years. Due to that policy I was able to buy an 8 year old imported Nissan Leaf when I never could have afforded a new one. Apart from the battery the car is practically like brand new. In Japan Nissan has a replacement program for old batteries, but I live in New Zealand where the Nissan agents aren't at all interested in doing it (either for imports or cars sold locally through them). They want me to buy a new Leaf - or better yet some polluting gas car that they can make more money out of servicing - every 5 years or so.
Books, computer games, they're exactly the same - right? Wrong.
The same way availability of books free of charge increases the sale of books by the same author, as discovered by Baen Books' Baen Free Library (https://en.wikipedia.org/wiki/Baen_Free_Library). It was founded in 1999 by Eric Flint, an author himself, and publisher Jim Baen.
So you say. But it's not up to you to decide. If Nintendo don't really 'know', and could actually sell more games by letting people play free copies on their PCs, then it's their choice not to do so. You are not on their board of directors, so you don't get a say. One thing for sure though is that Nintendo is one of the few console game producers still in business, and was last year rated the richest company in Japan. They didn't get there by making stupid decisions.
The reason Nintendo try to stop people making emulators is obvious - because they know people will use them to play pirated games.
Yet, there is no actual data backing that "knowing". In fact, actual experiments and statistics point to the opposite.
Except for a few bad apples who try to make a profit off others (by clandestinely using open source in their commercial projects, usually breaking the license; or by making fake cartridges or copies of games and selling them), a typical "pirate" actually spends more money on games than those who do not "pirate" at all. Companies didn't ditch DRM because they decided to be lenient towards "pirates"; they did so because it hurt their sales.
Yes, DRM hurts sales. But if it wasn't for pirates (which on some platforms is the majority of users if you don't have some DRM) it wouldn't be necessary. Game cartridges are themselves a form of DRM, because they are expensive for the individual to copy and fake cartridges can be impounded. Game console makers got by for years because of this. However today - with modern PCs, emulators, and the net - users don't need to buy any hardware and distribution is essentially free. The only people making money from that are computer hardware vendors, ISPs, and Microsoft.
But by all means, do take your personal beliefs stemming from stereotypes and other people's beliefs as "knowledge" and "facts", and completely ignore what results have been discovered by actual independent research. No, I will not give you any sources or links...
I don't need links or studies to show me the practical difference between open source and closed source software. Right now I am using a PC running Linux Lite, which is practically the same as Windows but totally free. How many others are doing so? How much are the authors making from it? All Linuxes combined currently have 2% of the desktop market. Windows has 76%, MacOS X has 16%. Microsoft didn't get to be number 1 by making their software open source.
A bit over two decades ago, I was just like you. Then I hired a lawyer to teach me how open source licenses, and copyright licenses in general, work. I learned how open source actually works for a company operating on ordinary profit-making principles.
Good for you. But don't assume that because it worked for you it must work for everyone. Plenty of software developers have started open source and then realized they needed more control over their IP. Hey, maybe some of them were wrong about that. But you don't get to tell them what's best for their situation. The market will decide. If Nintendo is anything to go by then the market has already decided...
Right now I am using a PC running Linux Lite, which is practically the same as Windows but totally free. How many others are doing so? How much are the authors making from it? All Linuxes combined currently have 2% of the desktop market.
And Linux has 100% of the HPC market, and a major piece of the server market (at least twice the next competitor, which is not Windows).
It's all about the money. You know it, I know it, we all know that money makes the world go around. That's how the system works and anything that threatens it threatens our way of life. Long term? In the long run we are all dead. Who cares about after that? Capitalists don't care at all. And we are all capitalists - we wouldn't be part of the system if we weren't.
By the time the latest games got shipped to New Zealand everybody already had a free copy.
Convenience and competition ALWAYS beats piracy - don't look further than the PC game market.
Attached is a study about piracy (pretty old and lonely, 2015), for some details.
I couldn't immediately find a clip of another Jobs' explanation that if you want a song and download a few copies off Napster or Limewire and check them to see which are mislabeled, which are poor quality etc instead of paying $1 to buy the song on iTunes then "you're working for less than minimum wage".
Quote: I couldn't immediately find a clip of another Jobs' explanation that if you want a song and download a few copies off Napster or Limewire and check them to see which are mislabeled, which are poor quality etc instead of paying $1 to buy the song on iTunes then "you're working for less than minimum wage".
Currently I purchase books for 99p off Amazon and the first thing I do is run them through Calibre to remove the DRM. I don't share them with anybody, but if I couldn't remove the DRM I wouldn't buy them, and these are the equivalent of the $1 tracks Jobs was on about.
A $1 track is just $1, but you don't have just one track. You tend to have loads of them, so you're really talking loads of $. And the 'middle way' that Jobs speaks of is to lock all that up so you can only access them with some specific software on specific kit at the whim of a mega-corp who doesn't see you as an individual. That's just bonkers.
Did you understand what I've written?
Convenience and competition ALWAYS beats piracy - don't look further than the PC game market.
Attached is a study about piracy (pretty old and lonely, 2015), for some details.
https://www.youtube.com/watch?v=r9z5FFnAaZ4 (https://www.youtube.com/watch?v=r9z5FFnAaZ4)
I couldn't immediately find a clip of another Jobs' explanation that if you want a song and download a few copies off Napster or Limewire and check them to see which are mislabeled, which are poor quality etc instead of paying $1 to buy the song on iTunes then "you're working for less than minimum wage".
Can you download kindle books from Amazon to a PC?
Convenience and competition ALWAYS beats piracy
I'm not interested in what Steve Jobs' opinion on piracy was, since he had his own business to peddle.
The document I've linked is properly done research, not a personal and biased opinion.
...Sorry for the misunderstanding, but I don't consider Apple competition-friendly; convenience, yes - competition, no. Compare iTunes and Steam policies as an example.
Why are you attacking me when I'm agreeing with you?
Also, I'd take the real-world vast success of the iTunes store (and others) as far stronger evidence than a hundred academic studies.
In a very real sense, convenience is a root cause for code bloat, too.
Indeed. Throwing in a library or doing it the dirty inefficient way is usually easier, so that's what people do.
Using third party libs comes now with an additional risk:
Yep, I might not be a fan of Apple but they have truly revolutionized the music industry.
My point of view is a little different: when iTunes came out, the music industry business model was already under pressure from the digital formats (and not only P2P); without Apple's opportunistic grab, the music situation now would be similar to what the video streaming market is now. And to remind you how customer-friendly Apple is: DRM was removed when Amazon started to sell MP3s, and a lossless option was added when Bandcamp (IIRC) started doing so. Presenting Apple as a customer-focused company is hypocrisy.
It has taken a lot of legal work in the background to even allow Apple to implement its iTunes music selling business model. The record labels really, really didn't want anything other than selling CDs in physical stores. But eventually Apple made it happen, so that they could make music as conveniently accessible as possible at a good price. This has dealt one of the biggest blows to music piracy ever. For a lot of people it became more convenient to buy a song than to pirate it from P2P networks. So they instead started buying music. All this was part of the plan for their iPod ecosystem.
...
Apple had the distribution in place for years; when the Amazon deal came through, Amazon still had to build up the distribution. Apple just had to flip a switch once the Amazon deal was sealed; that's how they managed to beat Amazon by two weeks. But the most important part of the deal (from the labels' POV) wasn't the DRM-free distribution, but the multiple storefronts.
Amazon started selling DRM-free MP3s from EMI and Universal on 25/9/2007.
Apple started selling DRM-free AAC songs from EMI on 10/9/2007.
Apple was first, and it was the labels that were the hold-up, not Apple. Apple had been doing the hard work of trying to persuade them to go DRM-free for years.
As a conclusion to our discussion, care to comment on the original DRM terms, especially the one which locks you into the Apple ecosystem (unlimited copies on Apple devices)?
Bloat exists because it's currently cheaper to leave piles of shit everywhere than clean it up.
With 5GHz processors and many gigs of DRAM it doesn't matter as much these days.
When performance or user experience suffers in any way then it will change.
There's a massive performance and energy usage wall coming up on the hardware side of things that will put the focus back on software efficiency.
With 5GHz processors and many gigs of DRAM it doesn't matter as much these days.
In the distant past I remember scraping the last byte out of assembly language programs to use less memory and gain more speed.
This is a problem in all forms of writing. There is a famous quote attributed variously to Blaise Pascal, Mark Twain and Jane Austen among others.
"I don't have time to write you a short letter, so I am writing a long one."
It takes time, energy and thought to reduce code size. All of these are of limited availability in development.
A lot of people still pile in and start writing code instead of planning it.
True.
If you fail to plan, you plan to fail.
Now if by planning you mean "architecturing", then sure. Sadly, this tends to be a forgotten notion in software these days, and if you talk about software architecture, you're likely to be considered a dinosaur stuck in the waterfall days.
The problem is that the terms SW architect and System architect are watered down to mineral water these days.
That's old thinking. Nowadays you architect a product by using Python/JavaScript to glue github/node modules together. Easy peasy.
That's old thinking. Nowadays you architect a product by using Python to glue github modules together. Easy peasy.
System architect are watered down to mineral water these days.
I don't know where your justification for 5 years of experience comes from.... Many years ago I had 2 years of extensive experience working in a specific area, and I still carry on decades after and get hired and get paid for the knowledge I got from that time period. Granted, I did not sit twiddling my thumbs during those 2 years, but I firmly believe 2 years is a huge experience-absorbing resource. Even 1 year, because in my second year the influx of new knowledge dropped exponentially - there was simply nothing new to learn.
I now see architects with less than 5 yrs of work experience f*cking it up because they have no clue what they do except look good
I don't know. I'm next to (I refuse to say I'm with) a team of 300 developers and quite frankly they have enough time to do a proper job of solving all the problems. They just lack the motivation and skill to do so.
I saw them burn 9340 hours on something that took me 30 minutes to fix. No shit. That one was fucking buried as well so the management didn't look like morons.
Been there seen that :(
There are always exceptions, and it also greatly varies with the domain and scope you are working on.
Sometimes code bloat IS the most efficient solution. "Good, fast, cheap: pick any two." It totally depends upon the problem being solved. I bet the Apollo 13 astronauts didn't want to delay their return to wait for code jockeys to slash a few cycles. On the other hand, they cared very much about optimizing current consumption because their fuel cells were gone.
Not every problem is a PhD thesis. Sometimes "good enough" is exactly the right answer.
The code in the Apollo computers did pick compromises where appropriate.
A lot of the firmware is in the form of bytecode executed using an interpreter to save program space, while some sections of code are executed as raw machine code in order to run faster. The computers they had were not very fast, so speed-optimized code was needed in some cases.
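Just to illustrate the trade-off in general terms (a made-up toy in JavaScript, nothing like the actual AGC instruction set or its interpreter): compact bytecodes are cheap to store, but every one of them pays for a trip through a dispatch loop, which is why the hot paths stayed native.

// Hypothetical 1-byte stack-machine opcodes: small to store, slower to run.
const ADD = 0, MUL = 1, HALT = 2;

function run(bytecode, stack) {
  let pc = 0;
  while (true) {
    switch (bytecode[pc++]) {
      case ADD: stack.push(stack.pop() + stack.pop()); break;
      case MUL: stack.push(stack.pop() * stack.pop()); break;
      case HALT: return stack.pop();
    }
  }
}

run([ADD, MUL, HALT], [2, 3, 4]);   // (3 + 4) * 2 = 14

// The same computation written as straight code runs faster, but costs more
// program space once you have hundreds of such routines.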