"Could have been solved if they had only done more unit testing, added more abstraction layers, and introduced that fancy new framework!"
Ah, you work with "web programmers" too :palm:.
The usual way to fix such a bug is to just try doing the thing in a slightly different way and see if the issue disappears.
"only crashing once a month is reliable"
Hey, crashing for 45 minutes every month is still 99.9% reliability. /s
Quite! |O
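For what it's worth, the arithmetic behind the quip roughly checks out; a quick back-of-the-envelope in Python (assuming a 30-day month):

```python
# 45 minutes of downtime in a 30-day month, expressed as availability
minutes_per_month = 30 * 24 * 60          # 43200 minutes
downtime_minutes = 45
availability = 1 - downtime_minutes / minutes_per_month
print(f"{availability:.3%}")              # -> 99.896%
```

Three nines (99.9%) still permits about 43 minutes of downtime a month, a failure rate that would trigger a recall in most hardware businesses.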
The usual way to fix such a bug is to just try doing the thing in a slightly different way and see if the issue disappears.
How is that a problem? Often the first step in locating a bug is finding when it occurs and what changes make it stop occurring. There are people who understand the lower layers, so communicate with them and they will help you with your issue. Isn't the same thing true in hardware? Do you understand every design decision in an IC that might be causing you problems?
Although I do agree there is too much complexity and abstraction in quite a lot of software, especially things using "web technologies" (looking at you, Electron).
Oh yes, the PC (And Web) programmers view that RAM is infinite, and that only crashing once a month is reliable |O.
I also love that JavaScript (spit!) does not let you do proper networking; in particular, it seems you cannot listen on a UDP port, and multicast support is 'interesting'. Very annoying when you have something like audio metering data that you want on a device's web page. Would generating arbitrary TCP packets really have been too much to ask?
Everyone should start by being required to write "Elite" on an 8-bit 1 MHz 6502 with 48 kB of RAM (including the frame buffer).
Regards, Dan.
"I guess it is a bit similar in hardware, but the layers underneath tend to be simpler than these massive software stacks that modern code runs on"
I think some hardware is starting to go that way; one example is control loops in SMPS controllers. The traditional analogue opamps and comparators are being replaced by digital control loops, so you can't use traditional control theory anymore to model your power supply. The digital control loops aren't even properly documented, so they just become some abstract hidden layer of hardware which you end up having to second-guess. Power Integrations and Dialog Semiconductor are just two companies I can think of, with ON Semi and maybe TI working on similar designs. To be fair, Dialog Semi at least give you a simulator.
"you will have the largest DDOS botnet in the world - just pay a few bucks to an ad network to deliver your code as ads on some high profile websites."
That surely is what firewall rules exist to prevent?
I also love that JavaScript (spit!) does not let you do proper networking; in particular, it seems you cannot listen on a UDP port, and multicast support is 'interesting'. Very annoying when you have something like audio metering data that you want on a device's web page. Would generating arbitrary TCP packets really have been too much to ask?
At the end of the day I don't really care what the JS does within its sandbox, but there are more appropriate places to filter network frames than at the edge of the sandbox, and those places are generally more usefully configurable.
I would get less annoyed by this sort of thing if I was not always being told by our web devs that web-based 'apps' are the future, when they clearly cannot do basic things that I would expect of a general-purpose machine.
Additionally, I find the term "programmer" to be somewhat of an ego massage, considering how many layers of software goop sit underneath the layer that the "programmer" is copy/pasting templates into. Unlike with ASM, I very much doubt, and would almost guarantee, that the "programmer" does not know, and never did know, the ASM instructions for the silicon upon which those many layers of software goop are piled.
Being on both sides of the fence, the biggest problem is that the two disciplines do not cooperate more with each other. It is not the HW eng's or the SW eng's fault when problems arise; stop blaming each other and take a look in each other's kitchen before starting bitching.
One way or another, the situation is shit.
SW has become massive over the last decade. Many HW engineers have no clue how many lines of code are written, and that it is virtually impossible to prevent all possible conditions or to test all forks and all possible outcomes.
Very expensive software exists that tests the code, tries to find all unhandled situations, and reports issues.
We once had five SW engineers work three months to get rid of all the possible issues that came from two months of coding; it can be that complex.
Then consider that most modern code uses open-source third-party stacks. What is in those, or do you think those are perfect? Do they really behave as documented under all conditions? Can they gracefully handle all input parameters?
When I started 20+ years ago we had 3x more HW engineers than embedded SW engineers; we now have 20x more SW engineers than HW engineers. Do you really think that is because the company does not care? Or is it because it wants all the latest and greatest "features" that clients can think of in their product?
Thanks for that elaborate analysis and insight.
The usual way to fix such a bug is to just try doing the thing in a slightly different way and see if the issue disappears.
How is that a problem? Often the first step in locating a bug is finding when it occurs and what changes make it stop occurring. There are people who understand the lower layers, so communicate with them and they will help you with your issue. Isn't the same thing true in hardware? Do you understand every design decision in an IC that might be causing you problems?
Although I do agree there is too much complexity and abstraction in quite a lot of software, especially things using "web technologies" (looking at you, Electron).
Yes, but I mean the nonsense bugs where you have no idea why they happen, the documentation doesn't say anything about them, and the layers underneath are not in your control.
For example, I had a case where I was using the .NET chart control in C# and came across a bug where the whole program would suddenly crash inside the paint calls to the controls. The debugger doesn't really give you any details about where it happened; it just tells you which form window the event handler crashed on, but I knew it was the chart control's fault, as it only happened when I did stuff to it. If I had the source code for that control, the debugger would likely point into it and show where the crash happened. Normally this chart control has some safeguards inside it that draw a big red X across the control when it gets too confused or misconfigured to draw something reasonable. Well, they forgot to safeguard against an overflow that happens if your chart's Y axis has values above about 1 million and you zoom into a small part of the chart. Okay, so then I'll just wrap an error handler around it so that my program doesn't crash on the spot but shows an error text or something. Well... the mechanism that calls events is deeper down in .NET, so I can't just wrap a try statement around it, nor can I wrap the code that's causing it in a try statement, since I don't have the source code to the chart control. The documentation says it should display a red X when it encounters an error. I don't think anyone at Microsoft would be willing to help me with the problem, but if enough people complain they will probably fix it in the next version of the .NET Framework.
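The underlying trap generalises beyond .NET. Here is a minimal Python sketch (all names hypothetical, nothing to do with the real WinForms internals) of why a try block in your own code cannot catch an exception raised later inside a framework-driven callback:

```python
pending = []

def register(callback):
    """Stand-in for a framework API: it only records the handler."""
    pending.append(callback)

def buggy_control_paint():
    """Stand-in for the closed-source control's paint code."""
    raise OverflowError("chart axis overflow")

try:
    register(buggy_control_paint)   # completes fine: nothing runs yet
except OverflowError:
    print("never reached: registration itself doesn't raise")

# Later the framework, not our code, invokes the callback. The try
# block above has long since exited, so only the framework's own
# dispatch loop is in a position to catch the error:
caught = None
for cb in pending:
    try:
        cb()
    except OverflowError as exc:    # the "red X" safeguard would live here
        caught = exc
print("surfaced in framework dispatch:", caught)
```

That is exactly why the missing safeguard has to be fixed inside the framework: by the time the exception exists, no user-written try block is on the call stack.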
I guess it is a bit similar in hardware, but the layers underneath tend to be simpler than the massive software stacks that modern code runs on, so it's easier to properly document and debug them. Though when writing hardware drivers I do run into cases where something is just plain broken. For example, on TI's OMAP processors the SD card controller's buffer-full flag was always zero; it supposedly worked for DMA transfers, but reading it manually didn't. The fault was not yet documented in the PDFs, but the solution was to instead wait for the buffer-empty flag (which worked fine) and fill the buffer with however many bytes the buffer is supposed to be able to hold.
The problem is not software itself, but that they ship barely functioning, maybe beta (sometimes alpha) builds to the customers, using them as guinea pigs, and then patch later as the bug reports arrive, because hiring a bunch of people to test things and find bugs is much more expensive than a half-assed PR blog post when shit inevitably hits the fan...
The biggest difference between HW and SW is that releasing a software patch is free, while releasing a hardware patch (recalling a faulty product) is insanely expensive (just ask Volkswagen). Do you want people shipping quality software? Start imposing heavy fines and taxes on patches and bugfix releases and see how the whole industry will suddenly focus on quality.
(a bunch of insulting BS)
The great trouble with software is the quantity of lines of code; many are useless. Add the OOP programming paradigm and "efficient" languages like Java, where printing a line to the screen takes four lines of source. Now imagine a complex system like SAP. Who is brave enough to debug it, to seek out the failure and correct it?
Economically, debugging is expensive in money and time; it is easier to write a new function and patch the error once it shows up, but in exchange the program grows in size and consumes ever more resources.
Another thing: patching is cheaper than the other options, but it is never free, unless the corporations pay the programmers with rice bowls.
Sorry, I didn't mean that as an attack on you; quite the opposite. The problem of overly complex SW creeping in everywhere is just irritating. Reality, but still really irritating. It's kind of like someone selling you a chainsaw with feature creep, and when you start to work with it you notice that the motor only runs for 5 minutes. That would be essentially worthless equipment. That said, I have had two chainsaws with 0% software: one had a major design flaw in the carburetor (German) and the other had an oil pump that only worked when rotated backwards (Swedish). Needless to say, they were the bottom-of-the-lineup models, as I only need one once a year; still, I would assume that a chainsaw has a motor and pumps that run. ??? ::)
Hardware also fails
https://www.eevblog.com/forum/projects/lots-of-failing-outdoor-led-lamps-very-long-overhead-mains-cable/
and people are demanding quick release of apps and features.
Take away everyone's smartphone, kill Linux, OSX and Windows, and let's all stop using computers because they have bugs.
Let's face it, life is not perfect, and if society does not tune down the speed of progress this will only get worse on the hardware side too, where the customer will be the tester.
"Ah, you work with "web programmers" too :palm:."
Funny how the frigging web site on an embedded device ends up using more code space than the program that does whatever the thing was really intended to do (and how, when you point out that the result needs an i7 with modern gamer graphics to render properly, they cannot explain where the time goes).
Regards, Dan.
... well that's what I think. Now read this:
https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/
Get ready for self-crashing cars & self-employing robots. |O
Yeah, you really notice it on mobile phones too. They become slower every year because more and more abstraction layers and bloated code are added in between. I'd say Android is a prime example of how not to build mobile software.
"Being on both sides of the fence the biggest problem is that both disciplines do not cooperate more with eachother. It is not or the HW eng or the SW eng fault if problems arise, stop blaming eachother and take a look in each others kitchen before starting bitchin..."
This! Was the latest Spectre vulnerability a hardware or a software f-up? Not passing judgment, just asking. With modern devops, microservices, etc., complexity can actually be handled nicely in the web world (few people bother to learn it, though).
Would you consider ST-generated HAL-based code a contributor to future disasters?
Software is stupid
Software has been made overly complex over the years, and this is done because of features. Features that are not necessary, and which are the bane of reliability.
Part of the problem is software writers insisting on continuing to use programming languages that went past their sell-by date even before Windows XP was thought of. Yet at the same time they berate us for using 'old and insecure' software! The hypocrisy of this beggars belief. :-//
http://iwrconsultancy.co.uk/blog/badlanguage
Spec == Init /\ [][Tick]_<<clock>> /\ WF_<<clock>>(Tick)
Oh yes, the PC (And Web) programmers view that RAM is infinite, and that only crashing once a month is reliable |O.
You try creating a billing system for an energy retailer. You receive requirements through about 3 different channels to about 3 different people in your team. The customer doesn't even understand their current business processes, nor what they want their new ones to be. You fight valiantly to get the details out of them, trying desperately to explain when requirements actually conflict, but they run around you and the code gets written by a junior engineer anyway.
Part of the problem is software writers insisting on continuing to use programming languages which went past their sell-by date even before Windows XP was thought of. Yet, at the same time they berate us for using 'old and insecure' software! The hypocrisy of this beggars belief. :-//
http://iwrconsultancy.co.uk/blog/badlanguage
I was referring more to security issues, but yes, any code can contain bugs. It's just that the security flaws have more serious consequences than the odd bluescreen or two.
You try creating a billing system for an energy retailer. You receive requirements through about 3 different channels to about 3 different people in your team. The customer doesn't even understand their current business processes, nor what they want their new ones to be. You fight valiantly to get the details out of them, trying desperately to explain when requirements actually conflict, but they run around you and the code gets written by a junior engineer anyway.
^^ This! It's the same nearly everywhere I have worked.
Even if I could write a formal spec for the design, different product managers submit conflicting requirements. None of them would understand the spec. Even if the spec is written in plain English, getting them to read it, understand it and sign off is hard work. And regardless of what is agreed, the requirements will change multiple times through development.
I then find that the engineers designated TBA1 and TBA2 are never employed, so the dev team has to develop the product with fewer members than planned, who also spend half their time supporting the previous project that overran. If we refused to engage with the project, it would simply be passed to another department that hires a bunch of cheap coders fresh from college, or more likely outsources to someone who does the same, except charges 5 times as much for a worse result.
Saying that "if only programmers used a better programming language, products would be so much better" is kinda like saying "if only all countries used the same currency, then no one would be poor and go hungry".
Enforced boundary checks? Great, so your software crashes with a boundary check message instead of overrunning the buffer. An improvement, but still ... your program crashed.
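That trade-off is easy to see in any bounds-checked runtime; a trivial Python illustration of what "crashing with a boundary check" buys you:

```python
buf = [0] * 8            # 8-element buffer

try:
    buf[8] = 0xFF        # one past the end: the access is checked, so it raises
except IndexError as exc:
    outcome = f"loud local failure: {exc}"

# In unchecked C the same store would silently scribble over whatever
# happened to live just past the buffer: a far harder bug to find, and
# potentially an exploitable one.
print(outcome)
```

So yes, the program still fails, but it fails immediately, at the faulty line, and without corrupting unrelated state, which is a meaningfully better failure.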
With hardware, the product owners and managers understand that changing it after it has been built is very expensive. They still don't get that it is just as costly to change software, especially when it involves redesign or architecture change, because suddenly some product owner woke up in the middle of the night with this great new selling feature that all customers want. One problem: it was not foreseen at design time, no one knows how long it will take to implement without a good analysis, and still it's "stop everything", this new feature has to be made now and done in two weeks.
When you start looking at the details of what you're actually supposed to build with the tools you have, a completely new forest of little things blocks the road. We fall for this every time.
There are new ways of working in SW country; for instance, we use the Scaled Agile Framework. We plan for 6 sprints, 12 weeks, but after 4 weeks the planning is already completely obsolete. That is how bad a SW environment can be: very dynamic.
Buffer overrun issues are 99% laziness or sloppiness.
"You try creating a billing system for an energy retailer. You receive requirements through about 3 different channels to about 3 different people in your team. The customer doesn't even understand their current business processes, nor what they want their new ones to be. You fight valiantly to get the details out of them, trying desperately to explain when requirements actually conflict, but they run around you and the code gets written by a junior engineer anyway."
This has absolutely nothing to do with software; it is the same thing in hardware. Most of the time the client has only a vague idea of what it wants, and it's your job as an engineer to translate the hand-wavy explanations into specifications that can be implemented cleanly. After all, if they had the expertise to define a detailed spec, they could just as well implement the thing themselves.
Case in point: the guy next to me is working with a world-famous high-end car maker, and he is modelling components they had made to spec by a big, world-famous automotive power electronics manufacturer. Most of the time has been spent trying to understand the specs of the object, since the carmaker (the client) had absolutely no idea whatsoever.
I think that the problem is entirely due to the general business model of the software companies.
Software is made overly complex over the years, and this is done because of features. Features that are not necessary, which are the bane of reliability. Do you really need to be able to do all those things in software? Do you really need all the APIs, the wrappers for the APIs, the abstraction layer for the wrappers? No. But someone asked for a feature, and it "needs to be done". Do you really need to be able to start your Fiat/Jeep from your iPhone? No. But they did it anyway, and they did it badly. Are you surprised? It was probably built by a team of 50 engineers, all of them focusing on a small task: connecting the CAN line of the device to the ECU, connecting the ECU to the bluetooth module, running some legacy code written by interns.
I have an error which would require me to make changes in the Linux kernel. I'm not even kidding. There is an I2C device that I'm using; it has a software reset function that I need for normal operation. When the software reset happens, the device doesn't send an acknowledgement to the host: it resets without finishing the I2C transaction normally, and the host generates an error message.
You know, what the device is? An LED driver.
So because of some weird bullshit requirement to be able to blink LEDs, I would need to change stuff in the Linux kernel to be able to suppress error messages. Instead of fixing the code (and probably breaking someone else's code), how about getting sane requirements?
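For completeness, the usual userspace workaround for this class of quirk (rather than patching the kernel) is to treat the missing ACK as the expected outcome of the reset. A sketch, where `write_fn` is a hypothetical stand-in for whatever I2C write call is in use (smbus wrapper, ioctl, etc.); the register number and the errno choice are assumptions for illustration:

```python
import errno

# Linux I2C drivers typically report a NACK as EREMOTEIO (121);
# fall back to the raw number where the constant is missing.
EXPECTED_NACK = getattr(errno, "EREMOTEIO", 121)

def soft_reset(write_fn, addr, reg=0x00, value=0xFF):
    """Issue a software reset to a device that, by design, never ACKs
    the reset transaction. The resulting I/O error is swallowed as the
    expected outcome; any other error is re-raised as a real fault."""
    try:
        write_fn(addr, reg, value)
    except OSError as exc:
        if exc.errno != EXPECTED_NACK:
            raise               # a genuine bus problem, not the quirk
    # no ACK is normal for this part; nothing more to do
```

This keeps the kernel untouched and confines the device's quirk to the one driver routine that knows about it.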
There is a system with 4 requirements. Like: this button does this, that button does that, this switch is for safety, it stops movement. Each function has a piece of code, which is simple and works fine. The 4 pieces of code have 6 ways of affecting each other; just draw a graph of it. When you add a 5th option, the number of these connections grows to 10. Something we take for granted, like a TCP/IP stack, has billions of these connections, billions of possibilities for one part of the code to mess up another. Programmers can't keep track of it all? I'm not surprised.
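The numbers quoted follow the handshake formula n(n-1)/2 for pairwise interactions, which is easy to sketch:

```python
def interactions(n):
    """Pairwise interactions between n independent features: n*(n-1)/2."""
    return n * (n - 1) // 2

print(interactions(4))    # 6   -- the four-requirement example above
print(interactions(5))    # 10  -- after adding the fifth option
print(interactions(100))  # 4950 -- and it only gets worse from here
```

And that counts only pairs; triples and higher-order interactions grow faster still, which is exactly why feature count is so hostile to reliability.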
"you will have the largest DDOS botnet in the world - just pay a few bucks to an ad network to deliver your code as ads on some high profile websites."
That surely is what firewall rules exist to prevent?
At the end of the day I don't really care what the JS does within its sandbox, but there are more appropriate places to filter network frames than at the edge of the sandbox, and those places are generally more usefully configurable.
I would get less annoyed by this sort of thing if I was not always being told by our web devs that web-based 'apps' are the future, when they clearly cannot do basic things that I would expect of a general-purpose machine.
Javascript in a browser has no business generating arbitrary network packets. If you need that, your server layer should take care of it and deliver you the result e.g. over a websocket.
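A sketch of that server-side pattern, using only the Python standard library: a UDP listener decodes each metering packet and hands it to a `publish` callable, which in a real deployment would be the send side of a websocket connection. The websocket half, the port number, and the comma-separated packet format are all assumptions for illustration:

```python
import asyncio
import json

class MeterRelay(asyncio.DatagramProtocol):
    """Listens on UDP (which browser JS cannot do) and republishes each
    metering packet in a form web clients can consume over a websocket."""

    def __init__(self, publish):
        self.publish = publish              # e.g. a websocket send function

    def datagram_received(self, data, addr):
        # hypothetical packet format: comma-separated dB levels
        levels = [float(x) for x in data.decode().split(",")]
        self.publish(json.dumps({"from": addr[0], "levels": levels}))

async def main():
    loop = asyncio.get_running_loop()
    # bind the relay to the (assumed) metering port
    await loop.create_datagram_endpoint(
        lambda: MeterRelay(print),          # print stands in for ws send
        local_addr=("0.0.0.0", 5005),
    )
    await asyncio.sleep(3600)               # keep serving

# asyncio.run(main())   # left commented: needs a live UDP source
```

The browser then only needs a plain websocket client, and the firewall question moves to the server, where it can be answered properly.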
"No wonder cost of car today is 72% electronics related, according to reputable auto industry research papers."
And that is the part which keeps on working. At least I never hear at the service that they need to replace the ECU, the cabling and the software after 100Km because it wore out. It is always some mechanical fluidy plastic thingy which rotates and breaks because you dared using it.
"And that is the part which keeps on working. At least I never hear at the service that they need to replace the ECU, the cabling and the software after 100Km because it wore out."
Nah. Moisture here and there, lots of...
Therefore, good programmers have to leave the design open to modifying the source code without having to rewrite it all.
Unclear and ever changing specifications are indeed the worst in programming.
I hate JavaScript with a passion. It's a rubbish language that was never intended for its current purpose, based on out-of-date techniques; it grew warts, heads, arms and legs in all different directions and has only recently been 'tamed' in any way at all, mostly thanks to the VERY heavy compatibility frameworks behind things like Angular and React.
You're talking complete bollocks. And again, Javascript != DOM.
Well, thank you for that elaborate correction to my opinion. I am perfectly aware of what the DOM is, thank you, and I'm not sure where I said Javascript == DOM.
(*) Most of which confuse the DOM with JS, and/or can't grasp why JS is no C nor JAVA: different scoping rules, closures, prototypal inheritance, no classes, asynchronous/evented execution, etc.
BSON fixes some of that shit but just use protobufs or something instead.
We can disagree on the language. I worked on developing JavaScript frontends for several years in the past, and I still hate it.
It's really Scheme, with "Scheme" crossed out and "Java" written in as a marketing ploy. Until the ECMA standards came in it wasn't really a language at all; it was a collection of hacks, each one different in each browser. In part it STILL is. It's just that we have frameworks today that hide this and work around the browser implementation oddities.
Its dynamic typing is fairly retarded and sometimes extremely irritating. The scope is bonkers. Everything seems to be a function that takes a function as an argument and returns a function that takes a function as an argument. Recursion and callback-tastic. Its object orientation is just broken.
The only good thing that came out of Javascript was JSON, which thankfully shut the XML fan boys up enough that we managed to get away from slow, verbose, cumbersome DOM structures and immensely inefficient parsing.
> Dynamic typing is fine, automatic type conversion (casting) not so much; you've got to be careful and know what you're doing, or if not, avoid it altogether, which is easy to do.
If it's only easy to avoid, then with enough programmers and enough LOCs it won't always be avoided in the end. That's why large companies have been forcing statically typed variants on their programmers. Not possible is better than easy to avoid.
Well yes, perhaps banks can't change, but PayPal is 100% XML-free.
But static typing only saves you from a very tiny % of program errors and can generate others of its own (e.g. could not load this or that DLL version), so I gladly prefer to be relieved of the slavery of having to choose and type every fucking var type beforehand, for not much benefit if any. And I'm not saying it's not ok for C and systems programming; I'm just saying that for apps and the web it's not a must, it's more a needless hassle.
Unfortunately Javascript has migrated to the server side. Dynamic typing with automatic casting is one of those features which just has an inordinately high chance of creating exploitable bugs; it doesn't belong on anything internet-facing which deals with sensitive data.
And... as to JS <> DOM. Can you explain to me what exactly you can do with JS in a browser that doesn't involve the DOM? It's a bit daft saying that JS is fully 100% compatible across all browsers; it's the browsers that have different DOM, Window and API implementations, etc. Without those things JS is pointless and moot.
Ohh yes, the wonders of auto-completion: when in doubt hit ctrl-space.
Indeed. Cut and paste and Ctrl+tab your way to a finished product.
Java is really just glue for all the nice canned libraries out there that do all the hard stuff. When knocking anything Java out I have to write very little code. I mean hardly anything. Data import, meh import smooks and cut and paste for 20 minutes and you've got CSVs, XML being streamed into hibernate ...
Nice try, sounds good, seems plausible! But no. And again, Paypal... for example (and a zillion other online businesses).
> It's all tools for the job. You don't put a buck converter into a 12V circuit to run a 3.3V MCU that pulls 20mA. Yet some people hate linear LDO regs, and vice versa.
Actually, there do exist micropower buck converters for where even that tiny amount of power matters.
I've got 80 feet of computer books to go with my attitude. And, yes, I've read them, except for things like the X11 and Motif manuals which are just for reference.
I work in telecom now, and it's incredible how redundant and robust stuff is designed - especially the older stuff like DMS switches.
What about PayPal? They use Java server side.
Meanwhile Facebook, Microsoft and Google all have their own version of typed Javascript (with a few more out there).
One of the main problems in the industry is lack of talent.
I don't mean that sarcastically either. Literally lack of talent. In the UK, primary and high schools have completely trashed the reputation of "IT" as a career by reducing it down to "ICT Skills", which is then implemented by teaching kids how to use Microsoft Office. 99% of kids immediately go "Sod that, I'm not doing a career in IT! No way!"
After 20 or so years of this the number of IT graduates coming out of universities has diminished while the demand for software continues to increase. The net result is that companies I have worked for now recruit via transfer courses to take people with any degree (or no degree at all), put them through a 6-12 week "Academy" and plonk them into a programming or analyst career.
Worse is that the senior engineers are very often not given enough time or resources to continue their training.
So dodgy, misguided, naive code becomes the norm, and people like myself simply don't have the time to review, rewrite and educate to correct it.
Spaghetti thing makers are a prime example.
It will be another ten years before I'd even look at a spaghetti thing maker.
https://www.paypal-engineering.com/2013/11/22/node-js-at-paypal/
That predates the widespread adoption of typed Javascript; more importantly, though, they seem to have moved back (https://www.paypal-engineering.com/2016/05/11/squbs-a-new-reactive-way-for-paypal-to-build-applications/).
> No wonder the cost of a car today is 72% electronics related, according to reputable auto industry research papers.
And that is the part which keeps on working. At least I never hear at the service that they need to replace the ECU, the cabling and the software after 100 km because it wore out. It is always some mechanical fluidy plastic thingy which rotates and breaks because you dare to use it.
Then the managers see the prototype demonstration and think that it is ready for production; it works, so the rest is just a month's work or something like that.
Just met a case of this. Set up a high value banking transaction which involved entering a 20-digit number and other details with great care. Was then told I had to wait for a mobile phone message for two-factor authentication. Only, this is a new feature and they haven't told users to give them a mobile number yet.
So, went to the personal details page and tried to enter a number. Was told that the number could not be accepted because the existing number was {blank}. Well, yes of course it is because..
This is a typical syndrome with programmers. They simply have no real-world common sense. They can write 20,000 lines of utter gibberish, but can't think through even a simple real-world action. If they had to tie their own shoelaces they'd do so first and then try to put the shoes on.
> This is a typical syndrome with programmers. They simply have no real-world common sense.
|O |O |O |O |O |O |O |O
> You can grab your latest creation, take it home and show it off to your family; "I just made this!" beaming with delight.
Aaagghghghgh thank fuck I work in an office graced with more than one copy of the dragon book now.
Throw the damn thing out and replace it with a replicated distributed in-memory key-value store. Much more efficient :)
Hardware:
> Subject to the unchanging laws of physics, constants that are rock solid and can be measured, logged and relied upon ALWAYS.
> You can see what's broken through symptoms, dry joints, burnt out components etc.
> You can grab your latest creation, take it home and show it off to your family; "I just made this!" beaming with delight.
Sometimes goes a bit wrong though: https://en.wikipedia.org/wiki/TempleOS
At least the programming language which was used to create TempleOS is spot on as HolyC. The holy language of ceremonial programming tasks, the almighty software geekery. >:D
> Consumer software appears to work like this (Android, Windows, I'm looking at YOU):
That no longer seems to be the case for desktop PCs. I'm sitting here, quite happily using a 12-year-old PC with a modern OS. The only upgrades were the RAM (1GB to 3GB) and a solid state hard drive. Now, go back to 2006 and see if you could use a 1994-era PC: even with the RAM and hard drive upgraded, it still wouldn't run a modern OS and browser without falling over.
• "Look at this puny truck we have, we can only carry 1 ton of sand on this; we need a bigger truck"
(Buys bigger 'truck' [CPU, RAM etc])
• "Look at this GIGANTIC new truck! I'm so glad we bought this, it's rated at 10 tons... but I'm sure we can squeeze 15 out of it, and if we switch to a cheaper fuel grade we can get more out of our purchase."
So, you have the latest hardware, and it's not fast enough to keep up with your bloated mess of an OS that's about to ship, so you develop faster hardware instead of optimising your shiny new blob of tangled binary spaghetti. Rinse and repeat; it has the added convenience of driving totally unnecessary hardware sales.
Schmucks.
JSON is shit. Bad numeric types, no reliable schema implementation, terrible Unicode problems. Nothing to like. It’s 2017’s CSV that fell out of the JavaScript crack pipe.
BSON fixes some of that shit but just use protobufs or something instead.
XML is one of those things that will outlive us all and be declared a good idea after all, but not until, like all human progress, many things have died first.
The JSON syntax specified by this specification and by RFC 8259 are intended to be identical.
When there's a syntax error it's always better to err than to "try to fix it" automatically.
That depends on who you ask:
(https://www.eevblog.com/forum/chat/software-is-stupid-programmers-are-overpaid-poets/?action=dlattach;attach=390712;image)
But my point was not that JSON is better than XML. Just that sometimes there is a good reason for using JSON rather than the space efficient raw binary encoding. XML also does everything that JSON can, it just looks different.
An object whose names are all unique is interoperable in the sense that all software implementations receiving that object will agree on the name-value mappings. When the names within an object are not unique, the behavior of software that receives such an object is unpredictable. Many implementations report the last name/value pair only. Other implementations report an error or fail to parse the object, and some implementations report all of the name/value pairs, including duplicates.
The JSON syntax does not impose any restrictions on the strings used as names, does not require that name strings be unique, and does not assign any significance to the ordering of name/value pairs. These are all semantic considerations that may be defined by JSON processors or in specifications defining specific uses of JSON for data interchange.
YMMV!
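A quick illustration of the duplicate-name ambiguity discussed above, as ECMAScript's own `JSON.parse` resolves it (last value wins, silently; other parsers may throw or keep the first, as the RFC text notes):

```javascript
// ECMAScript's JSON.parse accepts duplicate names and keeps the
// last value; nothing warns you that the first one was discarded.
const obj = JSON.parse('{"id": 1, "id": 2}');
console.log(obj.id); // 2
```

Two conforming parsers can therefore legitimately disagree about what the same document means, which is exactly the interoperability hazard the RFC is hedging about.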
On 6/6/2013 8:42 AM, Allen Wirfs-Brock wrote:
> Given that duplicate names have historically been valid, that some people use them, and that the standard ECMAScript JSON parser accepts them, I don't see why allowing a parser to reject duplicate names is helpful.
On Jun 7, 2013, at 9:46 AM, Douglas Crockford wrote:
> Because some parsers do so, and have for years. JSON.parse is not the only JSON parser. Our purpose here is not to attempt to fix this thing, because fixing it will break it. There were unfortunately ambiguities in the RFC. If we can, we should remove the ambiguities, but we must do so without changing the meaning of what JSON is. I think it is more important to preserve the validity of existing archival datasets than it is to preserve the validity of existing parsers that throw on duplicate keys.
On 6/7/2013 9:59 AM, Allen Wirfs-Brock wrote:
> We should not be invalidating data or programs. We should only be explaining better.
No breakage is acceptable. No breakage.
json mailing list
json@ietf.org
https://www.ietf.org/mailman/listinfo/json (https://www.ietf.org/mailman/listinfo/json)
I fail to see the difference.
You don't see ICs sending XML or JSON over SPI.
If the knowns are known, binary wins on the software side, though it's impossible without a bitwise header shared by every client language that needs it. If the knowns are unknown, then use a flexible format like JSON, provided typing is not a problem or it's 99% strings, especially for display. XML is for when the format has to be both flexible and precise/prescriptive.
Don't build the Forth Bridge to cross every puddle.
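To make that binary-vs-JSON tradeoff concrete, here's a small sketch using a hypothetical sensor reading (the field names and sizes are invented for illustration): with a known schema the record fits in 3 bytes, while the self-describing JSON form is nearly 9x larger.

```javascript
// Sketch: the same hypothetical reading as a fixed binary layout
// (schema known to both ends) versus self-describing JSON.
const reading = { id: 7, millivolts: 3312 };

// Fixed layout: 1-byte id + 2-byte value = 3 bytes total.
const buf = new DataView(new ArrayBuffer(3));
buf.setUint8(0, reading.id);
buf.setUint16(1, reading.millivolts); // big-endian by default

const json = JSON.stringify(reading); // '{"id":7,"millivolts":3312}'
console.log(buf.buffer.byteLength, json.length); // 3 vs 26
```

The binary form only works because both sides agree on the layout in advance; the JSON form carries its own field names, which is the whole point when the knowns are unknown.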
The difference is that the last line of JSON must NOT have a comma. All others must. This makes it hard to write a routine to modify JSON. It means that if you want to remove an item you (a) have to check if it is the last item in its section, and (b) if it is, then go back up the chain and modify the previous one.
In a well designed system, the required syntax for a given item should not depend on its position in the list.
Well, of course you don't do that in the general case, any more than ICs sending C source code over SPI.
Poor INI file, feeling all excluded. :P
> The difference is that the last line of JSON must NOT have a comma. All others must. This makes it hard to write a routine to modify JSON. It means that if you want to remove an item you (a) have to check if it is the last item in its section, and (b) if it is, then go back up the chain and modify the previous one.
{
"colour": "red",
"age": 45,
"weight": 71.08
}
$.$sift(function($value, $key) { return $key != "weight"; })
{
"colour": "red",
"age": 45
}
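The `$.$sift` call above looks library-specific; for reference, the same key-removal in plain modern JavaScript (no library, just `Object.entries`/`Object.fromEntries`) is a one-liner, and it never touches commas at all because the serializer handles them:

```javascript
// Plain-JS equivalent of the key removal above: rebuild the object
// from the entries whose key isn't "weight".
const input = { colour: "red", age: 45, weight: 71.08 };

const output = Object.fromEntries(
  Object.entries(input).filter(([key]) => key !== "weight")
);

console.log(JSON.stringify(output)); // {"colour":"red","age":45}
```

Working on the parsed object and re-serializing is generally the way to modify JSON; editing the text in place is where the trailing-comma pain comes from.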
Agree though that the whole thing is a stupid situation, and if it had been done right from the outset then there would have been special keyboard keys for data delimiters: characters that are not used in normal typing.
00 nul 01 soh 02 stx 03 etx 04 eot 05 enq 06 ack 07 bel
08 bs 09 ht 0a nl 0b vt 0c np 0d cr 0e so 0f si
10 dle 11 dc1 12 dc2 13 dc3 14 dc4 15 nak 16 syn 17 etb
18 can 19 em 1a sub 1b esc 1c fs 1d gs 1e rs 1f us
I disagree; control chars are a thing of the past, invisibles are a pain in the butt, and human readability is a very good thing. JSON is what it is: *JavaScript* Object Notation. It suits JS perfectly; for any other languages, YMMV.
... well that's what I think. Now read this:
https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/ (https://www.theatlantic.com/technology/archive/2017/09/saving-the-world-from-code/540393/)
Get ready for self-crashing cars & self-employing robots. |O
This allows you to segfault shitty remote implementations.
Thou shalt not segfault in stock exchanges! Segfaults cost millions a minute. Failing over can take a few minutes. Usually we would just kick the broker connection out on error, regardless if it was his error or ours.
I estimate at least a trillion dollars has passed through my code. I have also, by proxy, been in the news, which turned out NOT to be my fault, but I was in the firing line via project ownership.
I'm sure the surveillance filter in work here just turned my way.
control chars are a thing of the past
...
human read-ability is very a good thing.
Which obviously doesn't include content description and database querying
There are lots of document DBs and key/value DBs and nosql DBs out there with a pure/plain text interface.
Everyone should start by being required to write "Elite" on an 8 bit 1MHz 6502 with 48kB of RAM (Including the frame buffer).
Regards, Dan.
> Everyone should start by being required to write "Elite" on an 8 bit 1MHz 6502 with 48kB of RAM (Including the frame buffer).
Make that 32K ;)
> Well, of course you don't do that in the general case, any more than ICs sending C source code over SPI.
That's not really the same thing. A better analogous data structure would be a serialized C struct, which they do indeed often send over I2c/SPI.
> I disagree, control chars are a thing of the past, invisibles are a pain in the butt, human readability is a very good thing. JSON is what it is: *JavaScript* Object Notation, it suits JS perfectly, for any other languages YMMV.
Thing of the past is a bit bold. Shells still rely on them.
Answer, nobody. So, why do we have to address computers in mangled grammar?
> Thing of the past is a bit bold. Shells still rely on them.
And they're nice when interfacing with a little embedded device over uart. It's lightweight and a human is still able to decode the datastream.
Perhaps we should introduce font sensitivity into programming, where the Courier word has a different meaning from the Arial version. :palm:
> also, who starts a sentence with a lowercase letter because the word can only be written that way, or walks into an office and announces, "CAPITALGEE-ood morning, CAPITALEMM-argaret!"
Because you want computers to do what you say, exactly, 100% of the time. Human communication doesn't require strict grammar, because it's usually perfectly ok to go "What did you just say?" and ask for clarification. If computers did that 1000 times a second, just because the programmer used ambiguous wording, well, it wouldn't be an improvement.
He was seen to be writing a program in MS WORD, highlighting keywords in colour by hand!
He did not last long on the course.