-
I don't know if it's just me, but I've observed that a lot of electronics engineers (recent graduates or not) seem to end up in jobs where the majority of their work involves coding. I'm not talking about just firmware, but even application development, Python, web development... anything not involving electronics. A lot of my uni friends who, since graduating, were trying to get into electronics design ended up working as software engineers instead. Some of them even started in an electronics design role but transitioned over into programming. Is there a reason for this? Is electronics as a job becoming almost 'obsolete'? Is there so much we can do with code on an ARM board or an FPGA that designing electronics isn't as highly valued anymore, and you can just use a stock standard circuit to get what you want done? Is electronics the sort of field that runs the risk of being outsourced to another country?
I work as an electronics/hardware engineer for the record, and I enjoy it, but I also find myself wanting to get involved with coding a lot more lately.
-
Re: Why do a lot of electronics engineers end up coding?
Because a lot of stuff needs code.
-
I would like to think it's because the EE training allowed them to intelligently program hardware...
-
Because having "recompile time" in minutes instead of being counted in weeks is easier.
-
Because I can perfectly adapt the firmware to my hardware, since I designed both. Same goes for software.
Sent from my A0001 using Tapatalk
-
For a number of products today, given the expected complexity, most companies prefer to put as much R&D time as possible into the software rather than the hardware. After all, you can have one guy route in a different CPU in under a week, but writing the platform software to accomplish what you need may take a team two months to write and validate.
For myself, it's because being the only hardware person at my company made me the only person who understood the peripherals well enough to write software for them. It's surprisingly hard to get a young PC programmer into the mindset of registers and bare-bones access; vice versa, I am horrible at PC software.
The other things that come with an electrical engineering background are power considerations, knowing that peripherals can run independently of the core, and knowing that you can add a transistor to invert that signal rather than writing a software protocol to make the highs a low. It's hard to escape your own knowledge, but without that background, some things in a datasheet are very obtuse.
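For reference, the software side of that trade-off is tiny too; a minimal C sketch, with the register address and pin bit invented for illustration (grab the real ones from your part's datasheet):

/* Software inversion of an active-low signal: read the pin and flip the
 * sense in code, instead of adding an inverting transistor stage.
 * The register address and bit position below are placeholders. */
#include <stdint.h>

#define INPUT_PIN_REG (*(volatile uint8_t *)0x23)  /* assumed input register */
#define SIGNAL_MASK   (1u << 3)                    /* assumed pin bit */

static int signal_active(void)
{
    /* The line idles high and pulls low when asserted, so invert here. */
    return (INPUT_PIN_REG & SIGNAL_MASK) == 0;
}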
-
Follow the money. There's more software than hardware.
Tim
-
There's no boundary between digital hardware and software, and it is becoming more difficult to distinguish between RF/analogue and software.
Hence people should be competent in both hardware and software, so they can choose the optimum partitioning for their current project.
-
There's no boundary between digital hardware and software, and it is becoming more difficult to distinguish between RF/analogue and software.
Hence people should be competent in both hardware and software, so they can choose the optimum partitioning for their current project.
That's true, but the big factor is there are several software jobs for each hardware job, and the software jobs generally pay better.
-
There's no boundary between digital hardware and software, and it is becoming more difficult to distinguish between RF/analogue and software.
Hence people should be competent in both hardware and software, so they can choose the optimum partitioning for their current project.
That's true, but the big factor is there are several software jobs for each hardware job, and the software jobs generally pay better.
The analog world is slowly being replaced by small processors and software. Hardware itself is starting to be replaced with virtual displays of the hardware. The writing is on the wall for the next step: software engineers. They do not demand overtime and never take sick days off. Good for business. It could be a problem if the end user is also virtual software. These human bags of mostly water will be in the way. Maybe they could be moved to farms out in the country along with their whiny opinions about how "they" started it all, blah blah blah.
-
Reading job adverts here in the UK it seems 'Firmware Engineer' is a solid 50% more salary than 'Electronics Design Engineer'...
-
Reading job adverts here in the UK it seems 'Firmware Engineer' is a solid 50% more salary than 'Electronics Design Engineer'...
So if you do both you get 250%, hehe.
-
Follow the money. There's more software than hardware.
Tim
Very true, and it’s been that way since I graduated in the mid 80s.
Despite a long career spent mostly in enterprise software, I only left electronics completely for a few of those years, during the early/mid 90s, usually finding odds and sods of electronics work to keep reasonably current. The biggest changes during my electronics hiatus were the almost complete shift from through-hole to surface-mount devices, together with massive increases in speed and frequency, plus a few regulatory bits and pieces, but none are insurmountable given a decent grounding (oops, pun).
I make a living from both, although a couple of times in the past decade I've spent significant time (two or three years each time) semi-retired, just doing the odd bit of electronics design and product development while my existing products pay their own way and bring in income. It's far harder to get back into enterprise software than it is electronics: the pace of change in enterprise software is just so relentless that skills and marketability for contract work become stale very quickly.
-
I design hardware and write the software to control the hardware. It's the best way to do it for small projects. Documenting the full hardware interface for a software engineer with no hardware experience is frustrating and error-prone. Plus, if there are issues with the hardware, it becomes very difficult to debug if you don't know how to write code to test that hardware. Finally, there are some things that are essentially impossible for a pure software engineer to design, for instance a PID-loop boost converter on a microcontroller. That requires such a tight interface between hardware and software that it would be utterly impractical to develop any other way.
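To give a flavour of how tight that coupling is, here's a minimal sketch of such a loop in C. All the register macros, gains and limits are invented placeholders; a real converter derives them from the power stage and the actual part's ADC and PWM:

/* Sketch of a PID regulator for a boost converter, run from the ADC
 * interrupt. ADC_RESULT and PWM_DUTY map to placeholder addresses;
 * gains and clamps are illustrative, not tuned values. */
#include <stdint.h>

#define ADC_RESULT (*(volatile uint16_t *)0x4000)  /* assumed ADC data register */
#define PWM_DUTY   (*(volatile uint16_t *)0x4002)  /* assumed PWM compare register */

#define SETPOINT 512        /* target ADC reading of the output rail */
#define KP 4
#define KI 1
#define KD 2
#define PWM_MAX 1000

static int32_t integral, prev_error;

void adc_isr(void)          /* runs once per conversion, e.g. at 20 kHz */
{
    int32_t error = SETPOINT - (int32_t)ADC_RESULT;

    integral += error;
    if (integral >  100000) integral =  100000;    /* crude anti-windup clamp */
    if (integral < -100000) integral = -100000;

    int32_t duty = KP * error + (KI * integral) / 256
                 + KD * (error - prev_error);
    prev_error = error;

    if (duty < 0)       duty = 0;                  /* saturate to valid duty range */
    if (duty > PWM_MAX) duty = PWM_MAX;
    PWM_DUTY = (uint16_t)duty;                     /* takes effect next PWM period */
}

The point being: the gains, the clamps, the ISR rate and the duty limits all depend on the inductor, the load and the switch, which is exactly why the hardware designer is the right person to write it.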
-
They do not demand overtime and never take sick days off. Good for business. It could be a problem if the end user is also virtual software. These human bags of mostly water will be in the way. Maybe they could be moved to farms out in the country along with their whiny opinions about how "they" started it all, blah blah blah.
I find that very difficult to believe. People writing software will be under exactly the same stresses as those designing hardware.
Personally speaking, I prefer hardware myself and find software more stressful, so I avoid it. I'm not motivated by money, so that's a non-issue. If my job was more software, I'd probably be taking time off for stress.
-
There's no boundary between digital hardware and software, and it is becoming more difficult to distinguish between RF/analogue and software.
Hence people should be competent in both hardware and software, so they can choose the optimum partitioning for their current project.
That's true, but the big factor is there are several software jobs for each hardware job, and the software jobs generally pay better.
The analog world is slowly being replaced by small processors and software. Hardware itself is starting to be replaced with virtual displays of the hardware. The writing is on the wall for the next step: software engineers. They do not demand overtime and never take sick days off. Good for business. It could be a problem if the end user is also virtual software. These human bags of mostly water will be in the way. Maybe they could be moved to farms out in the country along with their whiny opinions about how "they" started it all, blah blah blah.
Perhaps that was all tongue-in-cheek but I see little difference between EE and SDEs in terms of work ethic and salary expectations.
I was a SW Dev Engineer during the transition to software/firmware running everything. At the beginning (late 70s) the EEs sneered at software. Called us "software bunnies". But by 1990, there was a lot less electronics being designed that didn't have some sort of software involved. If you didn't have at least some software on your resume, you were at a disadvantage in the job market. And Wall Street significantly valued SW companies over HW ones.
In the end, the distinction between SW/FW and HW is blurred. With the advent of super-cheap microcontrollers, a lot of circuits built with "jellybean" parts are getting replaced by them. A tiny bit of code and an 8-pin micro can replace multiple 555s, for example.
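As a generic sketch in C of that 555 replacement — the output register address is made up, and the busy-wait stands in for a proper timer on a real part:

/* Replacing a 555 astable: toggle an output pin at a fixed rate.
 * PORT_OUT is a placeholder address; calibrate the delay to your clock. */
#include <stdint.h>

#define PORT_OUT (*(volatile uint8_t *)0x25)  /* assumed output register */
#define OUT_MASK (1u << 0)

static void delay_ms(uint16_t ms)
{
    while (ms--)
        for (volatile uint16_t i = 0; i < 1000; i++)
            ;                                  /* crude busy-wait */
}

int main(void)
{
    for (;;) {
        PORT_OUT ^= OUT_MASK;                  /* square wave out */
        delay_ms(500);                         /* ~1 Hz, set by the delay */
    }
}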
-
There's no boundary between digital hardware and software, and it is becoming more difficult to distinguish between RF/analogue and software.
Hence people should be competent in both hardware and software, so they can choose the optimum partitioning for their current project.
That's true, but the big factor is there are several software jobs for each hardware job, and the software jobs generally pay better.
The analog world is slowly being replaced by small processors and software. Hardware itself is starting to be replaced with virtual displays of the hardware. The writing is on the wall for the next step: software engineers. They do not demand overtime and never take sick days off. Good for business. It could be a problem if the end user is also virtual software. These human bags of mostly water will be in the way. Maybe they could be moved to farms out in the country along with their whiny opinions about how "they" started it all, blah blah blah.
Perhaps that was all tongue-in-cheek but I see little difference between EE and SDEs in terms of work ethic and salary expectations.
I was a SW Dev Engineer during the transition to software/firmware running everything. At the beginning (late 70s) the EEs sneered at software. Called us "software bunnies". But by 1990, there was a lot less electronics being designed that didn't have some sort of software involved. If you didn't have at least some software on your resume, you were at a disadvantage in the job market. And Wall Street significantly valued SW companies over HW ones.
In the end, the distinction between SW/FW and HW is blurred. With the advent of super-cheap microcontrollers, a lot of circuits built with "jellybean" parts are getting replaced by them. A tiny bit of code and an 8-pin micro can replace multiple 555s, for example.
It goes much further than that, of course.
Consider microprogrammed (in the original "AMD2900" sense) processors, or the way that the alleged Spectre and Meltdown workarounds involve changing the way Intel processors execute instructions, or the "Minix buried in the Intel processor".
-
Follow the money. There's more software than hardware.
Tim
Very true, and it's been that way since I graduated in the mid 80s. It's far harder to get back into enterprise software than it is electronics: the pace of change in enterprise software is just so relentless that skills and marketability for contract work become stale very quickly.
Which sector of enterprise software? All sectors?
-
They do not demand overtime and never take sick days off. Good for business. It could be a problem if the end user is also virtual software. These human bags of mostly water will be in the way. Maybe they could be moved to farms out in the country along with their whiny opinions about how "they" started it all, blah blah blah.
I find that very difficult to believe. People writing software will be under exactly the same stresses as those designing hardware.
Personally speaking, I prefer hardware myself and find software more stressful, so I avoid it. I'm not motivated by money, so that's a non-issue. If my job was more software, I'd probably be taking time off for stress.
I did not express that the best way. I meant a virtual engineer that is just a software program; now we do not have to pay him, or it. It happens slowly. An information technologist, the IT guy, was in high demand in the past, being on the leading edge of technology. Today this job is done by high-school kids working part time. Part of the reason is that software itself and standards are replacing the IT guy. He has not been 100 percent replaced by software, but he is being handed his hat.
-
Thanks for the thoughtful replies. My main point, which I may not have made clear, was why some electronics engineers end up going out of the field and just become programmers/SEs. I realise firmware programming is a proper function of an electronics design engineer... but some seem to totally leave the field and end up doing solely C, or something non-electronics related such as Python, C++, etc.
-
Simply because many functions which used to be performed by hardware are today performed by software.
This reduces the component count and increases the number of lines of code. It's as simple as that.
-
Over here, there seems to be a spiral of decline in hardware. Employers want engineers skilled in specific areas and are not really willing to do much training. Then they struggle to find engineers even with basic skills. People coming in with electronics qualifications fail to apply even Ohm's law. That leads to electronics design being outsourced, either as a whole design or at a module level. With fewer jobs around, there is even less incentive for students to take it as a career path.
The ratio of hardware to software engineers means that there are more career opportunities in software, and the barrier to entry is a lot lower: no qualifications needed, just aptitude.
-
Follow the money. There's more software than hardware.
Tim
If you are an electronics engineer doing hardware just for the money, you are in the wrong job and probably not much good anyway. The best hardware engineers are there because they have a passion for hardware engineering first and foremost. I hire "volunteers", not "conscripts".
As far as skill goes, from my observations many hardware engineers make poor programmers, although there are some exceptions, mainly because they have never learnt the art of programming. Yes, good programming is an art, accompanied by discipline, knowledge and skill. Some while back, a hardware engineering colleague said, "No-one cares how the firmware looks or if it is readable, as long as it works." How VERY wrong, and unprofessional.
In Australia, experienced hardware design electronics engineers get paid pretty much the same as experienced firmware developers, by the way.
-
Because software engineers rule the world. Each is his own deity. :) Why do you need to ask mortal?
-
Reading job adverts here in the UK it seems 'Firmware Engineer' is a solid 50% more salary than 'Electronics Design Engineer'...
Is it also possible that, from the employer's POV, a nice computer with a nice screen, a nice keyboard and heck, a really nice mouse too, for the coder/programmer is still much, much cheaper than a full fleet of electronics T&M equipment (scope, PSUs, SA, LA, DMM, various niche probes, etc.) for the EE? Better to get them outsourced? :P
As the bean counters always say when looking at the GL... asset depreciation stinks... >:D
<duck n run away>
-
Thanks for the thoughtful replies. My main point, which I may not have made clear, was why some electronics engineers end up going out of the field,
Because there just isn't that big of a field anymore.
Hardware can only be done once; software is infinite.
-
Proper software should also be done once.
-
Proper software should also be done once.
Really? I believe the industry disagrees with you. In practice this approach leads to late, over-budget, bug-ridden software that no longer meets the users' needs.
Needs change. If you attempt to analyse those needs upfront, then design, verify, implement, test and deploy your software, by the time you have finished, 2, 3, 4, 5 years later, things have moved on and your software no longer fulfils the needs of the end users.
This is why in most software houses today software is reworked in 2-3 week iterations.
-
Honesty trip here...
Answering the original question with a single picture:
(https://i.imgur.com/zaNcShJ.jpg)
Things that require knowledge workers to do non-tangible work which is poorly understood by other humans are a recipe for making a metric shit ton of cash. Also, there are so many mediocre and poor programmers that it's very quick to rise to the mega-cash jobs if, to use an analogy, you know not to put your dick in a food blender. On top of that, I have absolutely no intention of working for many more years, because you only get one life and enjoying it is priority 1, not 2.
Ergo, I decided many years ago, after being treated like absolute shit in a postgraduate engineering position, that I wasn't going to base my life on a "decisive career" and work my way up through the ranks like my parents told me I should. I job-jumped around like feck, started a shield company and contracted myself out as an ultra-pimped software engineer. It turned out that everyone else on the market was so dire it wasn't that difficult to look like a shiny golden nugget and take relatively more wonga for the privilege. This has worked well for a long time.
Plus, honestly, the best way to learn programming is from the bottom up, not the top down. Start at gate level and build a mental model of the machine. You learn a lot more and have a lot more applicable knowledge. Your typical front-end developer who learned from the top down runs a fucking mile when you start talking about ordered hash tables, linked lists, FIFOs and other relatively simple concepts.
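(And they really are simple. A FIFO, for instance, is a dozen lines of C; a minimal sketch using a power-of-two buffer so the indices wrap with a mask:)

/* Fixed-size FIFO (ring buffer). Indices grow monotonically and are
 * masked on access; unsigned wraparound keeps the full/empty tests right. */
#include <stdint.h>

#define FIFO_SIZE 64u                 /* must be a power of two */

typedef struct {
    uint8_t  buf[FIFO_SIZE];
    uint32_t head, tail;              /* head: next write, tail: next read */
} fifo_t;

static int fifo_put(fifo_t *f, uint8_t byte)
{
    if (f->head - f->tail == FIFO_SIZE)
        return 0;                     /* full */
    f->buf[f->head++ & (FIFO_SIZE - 1u)] = byte;
    return 1;
}

static int fifo_get(fifo_t *f, uint8_t *byte)
{
    if (f->head == f->tail)
        return 0;                     /* empty */
    *byte = f->buf[f->tail++ & (FIFO_SIZE - 1u)];
    return 1;
}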
As for proper software should be done once; proper software is NEVER done.
-
The rule of thumb to design a new device is:
- 1 analog electronic engineer
- 10 digital electronic engineers
- 100 software engineers
-
Proper software should also be done once.
It probably costs four times as much, or more, in man-hours, to write "proper" software.
First you need the quality procedure on top of everything, empowered by management, to enforce strict rules on how things must be written, reviewed and built.
You need boring (easily overlooked), rigid inspection methods to ensure that no corner cases are left untouched. Even an expert developer will forget cases from time to time. Or write an incomplete benchmark.
You need failure effects analysis. What if so-and-so part of the code glitches? What if the hardware glitches? What if the peripherals glitch (or fail mechanically)? Do you have redundant systems in place to handle those contingencies?
Heck, maybe it's more like tenfold cost increase.
There simply isn't the money for placing FAA or FDA-certified code inside IoT throwies.
It would take the collective decision that people aren't going to buy $5, somewhat annoyingly buggy parts, opting instead for $50 parts that are heavily regulated to be "mission critical". But that's just silly.
And that's the difference between, say, a Bluetooth pushbutton, and a Medalert bracelet.
And that's the reason why any non-trivial software is guaranteed buggy.
Tim
-
IoT throwies.
Just those two words combined into the same sentence make me want to explode in a fit of rage and go on a killing spree.
-
Thanks for the thoughtful replies. My main point, which I may not have made clear, was why some electronics engineers end up going out of the field,
Because there just isn't that big of a field anymore.
Hardware can only be done once; software is infinite.
As to the last sentence: how many posts on this forum say something along the lines of, “I’m going to use an Arduino (or Raspberry Pi) to do XYZ,” and, “Instead of rolling your own hardware, why not use an Arduino to do PDQ?” That’s the sort of mindset that results from removing hardware design from the list of solutions to whatever problem you might have.
And yeah, most of us with some embedded micro experience look at the Arduino and say, “well, that’s a fairly trivial design,” because it is. The value, as such, that it offers is the large library of software modules that allow the non-engineer to do something useful.
Occasionally, a suggestion is made: “why not base that [thing] you’re designing on an Arduino instead of making a board with some other microcontroller?” And the answer is obvious: since we have to spin a board to make the shield which makes our [thing] special, there’s really no savings involved, at least in terms of hardware. Space and power constraints moot the idea of Arduino-plus-custom shield, too. I can fit a whole lotta logic in the space of an Arduino.
And since we’re already competent in hardware design with embedded micros and FPGAs, we can bring up a new board quickly. We have a library of useful firmware modules that we understand, and we don’t need the handholding of the Arduino IDE.
All that said, many products use embedded processors that are very powerful and the products are complex, so it makes sense that there will be a team of software engineers working on the applications while only one or two engineers did the hardware design. A previous employer's business was CPU boards (VME and later cPCI) and peripherals for same (PMC boards, from serial peripherals to network interfaces to video cards). It was common for one hardware engineer to do the board design (working with a layout person), while there were several people working on software: BSPs for VxWorks, Linux and such, drivers for the peripherals, application-level programs for customer-specific work, etc.
And that’s all in support of the assertion: hardware is done once, software is done many times. One hardware platform can handle many disparate customer requirements, with the differentiation being the software that runs on the platform.
-
We have a library of useful firmware modules that we understand, and we don’t need the handholding of the Arduino IDE.
The thing is, the Arduino "IDE" is barely an IDE. It's a text editor with an upload macro. You could plug the avrdude command, a make script and a serial-terminal launcher into Notepad++ and get the same functionality.
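(For the curious, that upload macro boils down to a single avrdude invocation, something like the line below; the part, programmer and port are whatever your board uses:)
avrdude -p atmega328p -c arduino -P /dev/ttyUSB0 -b 115200 -U flash:w:sketch.hex:i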
If you are suggesting that is too much hand holding then I'm curious to hear what you think of the likes of Eclipse or IntelliJ.
The attitude of "Oh, I'm a real programmer, I only use Vim" is so sad. Sure, it's a skill to be able to use just Vim and the command line, but really a proper IDE will increase productivity, not because it takes away from your skills but because it does all the little things that just take time.
Consider something trivial like creating a Java value object (Pojo) in Vim versus Eclipse. Even if you can type pretty fast I could probably produce 10 value classes in the time it would take to manually type it out in Vim. Add members... "Generate getters/setter", "Generate toString()", "Generate hashcode", "Generate equals()"... next.
Having integrated things like a proper debugger, type-ahead/IntelliSense, error checking/correction and refactoring features can double or triple productivity, or more.
-
If the typing speed is a limiting factor when programming, then something else is very, very wrong there. ^-^
-
If the typing speed is a limiting factor when programming, then something else is very, very wrong there. ^-^
Ok, I'll bite. How do you write code without typing it?
-
If the typing speed is a limiting factor when programming, then something else is very, very wrong there. ^-^
Ok, I'll bite. How do you write code without typing it?
Outsource it to India.
-
Consider something trivial like creating a Java value object (Pojo) in Vim versus Eclipse. Even if you can type pretty fast I could probably produce 10 value classes in the time it would take to manually type it out in Vim. Add members... "Generate getters/setter", "Generate toString()", "Generate hashcode", "Generate equals()"... next.
Having integrated things like a proper debugger, type-ahead/IntelliSense, error checking/correction and refactoring features can double or triple productivity, or more.
Overlooks the question of why I'd want to create a Java value object in the first place. An overly complicated method of working spawns overcomplicated tools. Programming has gotten to the stage where it's like hiring NASA's Vertical Assembly Building to change the oil in your scooter.
-How do you write code without typing it?
Instead of typing 'x=3' you INCLUDE a 2.5GB library containing the value of x. That's how.
-
That's because it's cheaper and easier to reuse what someone else did, and probably did better than you.
Your average Java program looks like a few directories and a few files at most these days. That's mainly because all you do is glue the bits together that are already written.
-
Proper software should also be done once.
Right you are. But in reality nobody knows how to do this. We've only been doing real software for 50 years or so, and the rules change every few years!
Government projects require this, since budget is only allocated once for four years, because that is the term of the current parliament. Which is why they always fail.
Name one government IT project that didn't go far over deadline or budget, or one that is working to spec or is secure.
-
Plus, honestly the best way to learn programming is from the bottom up, not the top down. Start at gate level and build a mental model of the machine. You learn a lot more and have a lot more applicable knowledge. Your typical front end developer who learned from the top down runs a fucking mile when you start talking about ordered hash tables, linked lists, FIFOs and such relatively simple concepts.
... or even have a concept about what a processor does when invoking a function with arguments. And as for the effects of L1/2/3 caches and NUMA memory, don't even think about it.
However, any novel software (i.e. not just a recapitulation of something already there) is usually best approached by a combination of top-down and bottom-up design and implementation. But that requires a plausible estimation of any risk factors in the implementation, which is back to the previous point.
-
Consider something trivial like creating a Java value object (Pojo) in Vim versus Eclipse. Even if you can type pretty fast I could probably produce 10 value classes in the time it would take to manually type it out in Vim. Add members... "Generate getters/setter", "Generate toString()", "Generate hashcode", "Generate equals()"... next.
Having integrated things like a proper debugger, type-ahead/IntelliSense, error checking/correction and refactoring features can double or triple productivity, or more.
Overlooks the question of why I'd want to create a Java value object in the first place. An overly complicated method of working spawns overcomplicated tools. Programming has gotten to the stage where it's like hiring NASA's Vertical Assembly Building to change the oil in your scooter.
-How do you write code without typing it?
Instead of typing 'x=3' you INCLUDE a 2.5GB library containing the value of x. That's how.
So while I do not fully agree with a lot of the over-engineering in Java, if you don't understand what a value object is or why you would be using one, then maybe you shouldn't comment on it.
Your comment about x=3 again suggests you haven't developed enterprise-level software before.
-
That's because it's cheaper and easier to reuse what someone else did, and probably did better than you.
Your average Java program looks like a few directories and a few files at most these days. That's mainly because all you do is glue the bits together that are already written.
You obviously work on different kinds of Java applications than I do. I have one here which takes 15 minutes for the IDE to refresh the workspace alone, consists of about 40 hierarchical sub-projects and has around 50 classes and 150 dependencies in total. Granted, it's a "Death Star" and I hate it with a passion.
-
I'm fully aware of those sorts of applications. We have a C# chunk which takes 3.4GB of heap just after startup and an average runtime heap of about 22GB. 15,300 classes, 7.8 MLOC, NHibernate, COM components, bits of web, bits of WCF, lots of WPF, all sorts sucked in like a vampire.
That's not the right way to build applications though. Thank fuck it's only the front-office stuff.
We're playing with silo'ing it and using tens of Dropwizard ( http://www.dropwizard.io/1.2.2/docs/ ) instances to serve small chunks to the front-end team, who are all javascript wankers and want to use Electron.
-
Dropwizard is not bad for microservices, which sounds like what you are attempting. The tricky part with microservices is the whole registry-type stuff.
-
Yes, that's basically it, although they are larger than microservices traditionally are, each representing an entire chunk of the application; I don't really like the term, as it is vision-limiting. We're already using Consul for service discovery and health, and it plugs into that nicely.
-
That's because it's cheaper and easier to reuse what someone else did, and probably did better than you.
Your average Java program looks like a few directories and a few files at most these days. That's mainly because all you do is glue the bits together that are already written.
You obviously work on different kinds of Java applications than I do. I have one here which takes 15 minutes for the IDE to refresh the workspace alone, consists of about 40 hierarchical sub-projects and has around 50 classes and 150 dependencies in total. Granted, it's a "Death Star" and I hate it with a passion.
Sounds like you have problems that are nothing to do with Java and nothing to do with Eclipse :)
As someone said, companies are doomed to create products (and code) that mirror their org chart :)
-
As someone said, companies are doomed to create products (and code) that mirror their org chart :)
So true. Also the same as the CTO's personality. So usually a muddy shit ball.
-
If the typing speed is a limiting factor when programming, then something else is very, very wrong there. ^-^
Ok, I'll bite. How do you write code without typing it?
What I was trying to say was that the most important thing in programming is the design part.
Most of the time should be spent in the design phase. Before jumping to the implementation phase and before starting to type, there is a long way to go:
- first, we need to know what the goals are, and to understand those goals at a human level
- if there is a model to be implemented, or a process to control, this is the moment to understand it, and to find a mathematical model for it, if there is one
- identify the main blocks of the program/application whatever, and the dataflow between them
- break down those main blocks into smaller parts, group them in modules, libraries, functions, whatever
- pick the proper data representation and the proper algorithms for your data, estimate the amount of time and space required to process it. See if it can cope with future dataflow increases, or with sudden spikes
- while we are still in bird's-eye-view mode, look for any side effects, and for security issues that might arise from the design we chose, like the dataflow or the breakdown of big functionality into smaller blocks
Later, we step into design implementation, where we start to look closer at the details, like allowed languages, hardware limitations, already-existing libraries, licenses, breaking complexity into smaller and understandable pieces, and so on.
Typing, and the speed of it, is the last thing we should be worried about, because the amount of time and effort put into typing is negligible compared with the whole process.
Just to be clear, I'm not saying we should write our programs using just toggle switches with 0's and 1's. A good IDE is gold, and helps a lot.
Sometimes the IDE goes overboard with the amount of "help and nurturing", and becomes unproductive by hiding too much. If the IDE is too fluffy and eye-candy, then the programmer will waste more time fine-tuning the IDE and its plugins than actually thinking about programming. Each IDE has its own idea about how life should be, and it's pretty crazy that good programmers need to search online for how to use an IDE button, and what exactly that button does, instead of simply typing
git add --all && git commit -m "Remove eye candy to increase performance"
The kind of fluffiness I was talking about in IDEs is even more damaging when the IDE does hidden things that were never asked for, and the programmer realizes only much later the damage done by the IDE's "good will".
TL;DR
The time spent for design should be much longer than the time spent with typing the code.
While any help is usually good for the moment, too much help can become damaging in the long run.
-
git add --all
Grrrr. Don't!
-
git add --all
Grrrr. Don't!
Lol. One of my pet hates too. “Oh you added all the solution metadata. Thanks dickhead!”
-
Yeah, that and the developer who hoards his work, works on six tickets, and then "git add --all"s the lot with the commit comment "Lots of stuff".
-
Well, I am a hardware engineer who sometimes ends up doing programming.
But I don't contribute to big software projects; that's what the big pile of software developers here are employed to do. What I end up doing instead is writing small tools or tests: making firmware flashing tools, munching through some test data in an automated way, putting together quick proof-of-concept things that involve a processor somewhere, etc. But there is also FPGA programming; this is not something you can really give to any of the software developers here, because they will give you the strangest of looks when you show them any sort of HDL code.
In a serious product there are indeed more man-hours spent on programming than anything else. I think it's easier for someone fresh out of school to get into programming than hardware engineering. They will already have a lot of the required skills from school. Okay, most might not be genius programmers, but they can spit out code that gets things done. Hardware, on the other hand, needs lots of learning to get to grips with, and most of it is not taught at school. It's not all just plugging shields onto Arduinos in the real world. Someone who does not tinker with electronics as a hobby is going to have a really tough time in a hardware engineering job. So it's simply easier to become a programmer.
-
Programmers should take the same approach to writing code as HW engineers do when designing a board: that is, consider as many failure modes as you can, and understand that the world around us is not ideal and that Murphy is always at work.
Sadly, the "implement and let the customer do the beta testing" mindset is slowly creeping into the hardware side; with very cheap manufacturing and components, we see lots of products on the market that shouldn't be out of the lab.
-
Well, I am a hardware engineer who sometimes ends up doing programming.
But I don't contribute to big software projects; that's what the big pile of software developers here are employed to do. What I end up doing instead is writing small tools or tests: making firmware flashing tools, munching through some test data in an automated way, putting together quick proof-of-concept things that involve a processor somewhere, etc. But there is also FPGA programming; this is not something you can really give to any of the software developers here, because they will give you the strangest of looks when you show them any sort of HDL code.
In a serious product there are indeed more man-hours spent on programming than anything else. I think it's easier for someone fresh out of school to get into programming than hardware engineering. They will already have a lot of the required skills from school. Okay, most might not be genius programmers, but they can spit out code that gets things done. Hardware, on the other hand, needs lots of learning to get to grips with, and most of it is not taught at school. It's not all just plugging shields onto Arduinos in the real world. Someone who does not tinker with electronics as a hobby is going to have a really tough time in a hardware engineering job. So it's simply easier to become a programmer.
This is like a plumber telling people that being a mechanic is easy, you just stick some oil in the hole marked "oil", it's easy... plumbing... that's difficult that is.
There is a BIG difference between the kind of single-file, bugged, brittle nonsense you write at university as a one-man team and what you work on as a senior software engineer in a team of potentially hundreds. Oh, and half of your list of things that you do... we do those as well as write software. ;) We consider those kinds of things "background activities" or "space fillers". Usually we don't even open a unit of work or a ticket to "munch" some test data. It's part of the job ;)
"Spit out code to get things done", LOL. Clearly this is why some of the code I see coming from hardware types makes me weep.
-
Programmers should take the same approach to writing code as HW engineers do when designing a board: that is, consider as many failure modes as you can, and understand that the world around us is not ideal and that Murphy is always at work.
Sadly, the "implement and let the customer do the beta testing" mindset is slowly creeping into the hardware side; with very cheap manufacturing and components, we see lots of products on the market that shouldn't be out of the lab.
Again, plumbers and mechanics.
How many failure modes do you think there are in a piece of software such as... let's say this forum?
I can tell you there are probably orders of magnitude more failure modes or cases than there are in most electronic circuits.
Electronics are fixed in time. Unless you physically swap a chip, nothing changes. Software is evolved and updated constantly. The dynamics change. The users change, the users' needs change, the environments change, the data changes, the database changes...
-
This is like a plumber telling people that being a mechanic is easy, you just stick some oil in the hole marked "oil", it's easy... plumbing... that's difficult that is.
There is a BIG difference between the kind of single-file, bugged, brittle nonsense you write at university as a one-man team and what you work on as a senior software engineer in a team of potentially hundreds. Oh, and half of your list of things that you do... we do those as well as write software. ;) We consider those kinds of things "background activities" or "space fillers". Usually we don't even open a unit of work or a ticket to "munch" some test data. It's part of the job ;)
"Spit out code to get things done", LOL. Clearly this is why some of the code I see coming from hardware types makes me weep.
Well, I'm not saying people that come out of university are good programmers, but they can still do something useful, so there is a reason to keep them around until they can get up to speed on how to work as a team on a bigger project. Give them some simple 1-to-3-day projects and work them up slowly.
On the other hand, when you take one of my typical classmates out of electronics engineering and give them a light-dependent-resistor sensor and tell them to turn on a 220V lamp using it, you will get a whole lot of head scratching and perhaps a dead guy or two in the end.
And while I might be quite a good hardware engineer, I probably wouldn't be that great at all if I was put in a team working on a giant C# project. But if they kept me around for a month or two, I'd probably get into the swing of things and do reasonably well.
-
Programmers should take the same approach to writing code as HW engineers do when designing a board: that is, consider as many failure modes as you can, and understand that the world around us is not ideal and that Murphy is always at work.
Sadly, the "implement and let the customer do the beta testing" mindset is slowly creeping into the hardware side; with very cheap manufacturing and components, we see lots of products on the market that shouldn't be out of the lab.
So very true, on both counts.
-
Programmers should take the same approach to writing code as HW engineers do when designing a board: that is, consider as many failure modes as you can, and understand that the world around us is not ideal and that Murphy is always at work.
Sadly, the "implement and let the customer do the beta testing" mindset is slowly creeping into the hardware side; with very cheap manufacturing and components, we see lots of products on the market that shouldn't be out of the lab.
This
-
On the other hand, when you take one of my typical classmates out of electronics engineering and give them a light-dependent-resistor sensor and tell them to turn on a 220V lamp using it, you will get a whole lot of head scratching and perhaps a dead guy or two in the end.
LOL.
I think they are each difficult, but in different ways. One aspect of software, in my experience of circa 32 years, with 16 of those professional is that it's always changing, but always staying the same.
The main issue with new and upcoming engineers is that they jump on the first part, "it's always changing", and dance around the newfangled stuff like it's the only way to do things, while completely missing the point of the second part: it's always staying the same.
The second part comes from the hardware side. It doesn't matter how many layers of hipster framework you stack onto a PC: the architecture has been around for 40+ years, and the underlying principles of that hardware have been around for a hundred.
In software we have minor shifts in what we do everyday. It's always different. We have major shifts over few years and complete paradigm shifts ever decade or so. Take a look at a website in the year 2000 and a website today. From a software point of view there is virtually nothing the same between them.
In the hardware space things move much slower. Sure, manufacturers bring out a new opamp, MCU or ADC from time to time, but usually they are fighting over small-print specs in the market. Opamp circuits, MCU circuits and ADC circuits have been around as long as I have, and probably longer. They barely change, and when they do, it's slow and slight.
When was the last ground breaking paradigm shift in electronics? The transistor? The IC? CMOS?
A good software engineer is one who can see down through the layers to how it all relates to the hardware it runs on, but respects that abstractions and reusable code hierarchies make development safer, faster and easier. Just gotta keep the hipsters at bay by playing Shirley Bassey's "History Repeating" at them once in a while.
-
Programmers should take the same approach to writing code as HW engineers do when designing a board: that is, consider as many failure modes as you can, and understand that the world around us is not ideal and that Murphy is always at work.
Sadly, the "implement and let the customer do the beta testing" mindset is slowly creeping into the hardware side; with very cheap manufacturing and components, we see lots of products on the market that shouldn't be out of the lab.
Actually, it is exactly this mindset you propose that the industry is moving away from.
A system is nothing without its users. It doesn't matter how long you spend or how much effort you put into defensive software; if it doesn't meet the needs of the users, it's useless.
Studies have shown that it is not software bugs that cause failures and budget overruns, but user rejection and failure to meet user needs.
So the trend today is towards a rapid, iterative and interactive model that includes the customer and the users at each stage: plan, develop, demonstrate, evaluate, repeat.
The old-school way of spending 6 months analysing requirements, 6 months designing, 6 months spec'ing, a year implementing, then 2 years testing always resulted in software that was out of date halfway through its development cycle, before it even left the door. So when requirements changed as businesses and users changed, everything had to be paused and restarted from the beginning.
-
On the other hand, when you take one of my typical classmates out of electronics engineering and give them a light-dependent-resistor sensor and tell them to turn on a 220V lamp using it, you will get a whole lot of head scratching and perhaps a dead guy or two in the end.
That's so true it's funny.
We had one guy who managed to get on the course, I assume through bribing or being related to someone in the faculty, who was like that. I remember the first thing we did in the lab, which was to build a simple low-side BJT switch. Easy money. A 2-minute job on a breadboard. So he connects the bulb across the rails and the transistor collector to +V and emitter to ground. Presses the button. "My light is turning off when I press the button, not on." After about the 6th press, the transistor had had enough of taking the power supply across it as a short (which had gone into foldback) and blew its arse out. Clearly the transistor was faulty, according to the guy. Five transistors later, even with several people sitting and talking him through why it was happening, he just couldn't see what he had done wrong. If someone fixed it, he would take the entire circuit to bits, put it back together wrong again and blow up another transistor. He had zero successful labs in 3 years. Not one positive result. He only got through because he tagged onto our team on group projects. :palm:
Programmers are far more dangerous when they walk away with that bag of spanners. They can destroy your company before lunch time.
-
git add --all
Grrrr. Don't!
Lol. One of my pet hates too. “Oh you added all the solution metadata. Thanks dickhead!”
Whoever chooses a command line over a mouse click will most probably also be very aware of what files were produced while working.
There is nothing wrong with 'git add --all' if the '.gitignore' file is properly maintained.
I bet the "magic" button for "repository save", or "submit", or "commit", or whatever other name a particular IDE is using, is also based on something like this:
.gitignore:
**/*.crap
!**/dickheadsintentional.crap
-
The other issue it causes is that multiple different sub-items of work get pulled into the same commit.
So you get "Changed X, Y and Z" and commit 15 files.
When in reality you changed X in 3 files, Y in 10 different files and Z in another 5 files. Reviewing the history later can lead to you going, "You changed X in this file? This file has nothing to do with X."
Personally, I use the command line 90% of the time and rarely the IDE, except when it comes to merging :) Then I do use a GUI merge tool.
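(Splitting the work at the command line is cheap, too; the file names below are made up, just to show the shape of it:)
git add -p                              # review and stage only the hunks for X
git commit -m "X: fix the parser"
git add src/y_retry.c src/y_backoff.c   # then just the files for Y
git commit -m "Y: rework retry logic"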
-
Indeed. It discourages thinking. I personally git add every file independently.
Today I have done 93 commits.
-
Here, whatever you do in your branch is your own business, but before you pull-request to master you should squash your commits, either in your actual branch or in a pull-request branch made for the purpose (if you want to keep your history).
This keeps a very clean timeline on master... or at least keeps the master timeline readable.
-
Spot on. Nice to know of people wielding a clue stick, unlike my immediate colleagues :-DD
-
On the other hand, when you take one of my typical classmates out of electronics engineering and give them a light-dependent-resistor sensor and tell them to turn on a 220V lamp using it, you will get a whole lot of head scratching and perhaps a dead guy or two in the end.
That's so true it's funny.
We had one guy who managed to get on the course, I assume through bribing or being related to someone in the faculty, who was like that. I remember the first thing we did in the lab, which was to build a simple low-side BJT switch. Easy money. A 2-minute job on a breadboard. So he connects the bulb across the rails and the transistor collector to +V and emitter to ground. Presses the button. "My light is turning off when I press the button, not on." After about the 6th press, the transistor had had enough of taking the power supply across it as a short (which had gone into foldback) and blew its arse out. Clearly the transistor was faulty, according to the guy. Five transistors later, even with several people sitting and talking him through why it was happening, he just couldn't see what he had done wrong. If someone fixed it, he would take the entire circuit to bits, put it back together wrong again and blow up another transistor. He had zero successful labs in 3 years. Not one positive result. He only got through because he tagged onto our team on group projects. :palm:
Programmers are far more dangerous when they walk away with that bag of spanners. They can destroy your company before lunch time.
Actually, I saw a similar thing happen in school, but with mains voltage instead. It was done with a school setup using shrouded banana cables and all, so it's really difficult to actually shock yourself. His goal was to turn the light bulb on and off using a switch, so what he did was connect everything in parallel. Upon plugging it in, the light bulb lights up, so woohoo, that's a good start; then he presses the switch and BANG! Well, the light did turn off, but so did the circuit breakers for that part of the classroom. Though when it comes to electricity-related bangs, the more impressive one was messing up the wiring for a contactor that switches a 3-phase motor between star and delta. The mistake caused it to switch into both star and delta mode simultaneously, as a result shorting all 3 phases together, so things got pretty loud, as you might imagine.
My point is that if you give someone fresh out of university a reasonably simple 3-day programming task, you will probably get something that works after a week; it might be ugly and inefficient, but it gets the job done. Tell them what they did badly and give them some more work, and after a few cycles of this they will probably become a reasonably useful software developer.
In contrast, do the same with an engineering graduate and a hardware project, and after a month you probably won't even have anything physical on the table. They won't know how to even begin designing a circuit, they don't know how to search for parts online, they can't even find the supply voltage for a chip in a datasheet, and even if they know what part they need, they probably don't know where to buy it. When it comes to PCB design, they might know how to use Eagle to autoroute a PCB, and when it comes to assembling PCBs they will have trouble soldering even large surface-mount parts. When the design is eventually built and it doesn't work, they don't have a clue how to use a multimeter or oscilloscope to debug it. This guy coming to work as a hardware engineer is just going to result in frustration on both sides: the employer will see him produce very little of value, and the employee will not like the job because he is constantly lost in how to do it right. I'm not saying every hardware engineer has to be able to design and lay out 20GHz RF circuits and DDR3 memory, but there is a good deal to learn before you are a reasonably useful hardware engineer.
-
So you mean that since I only started learning electronics in October and haven't (yet) blown myself up I'm doing better than the average bear? LOL
-
So you mean that since I only started learning electronics in October and haven't (yet) blown myself up I'm doing better than the average bear? LOL
Any pre-metamorphosis EE worth their salt would have had a good go at blowing themselves up or electrocuting themselves before going to university. That's my excuse, anyway.
-
Programmers should take the same approach to writing code as HW engineers do when designing a board: that is, consider as many failure modes as you can, and understand that the world around us is not ideal and that Murphy is always at work.
Sadly, the "implement and let the customer do the beta testing" mindset is slowly creeping into the hardware side; with very cheap manufacturing and components, we see lots of products on the market that shouldn't be out of the lab.
Yeah. My late boss thought that hardware development could be "agile". I hate that word. There is nothing agile about it when producing a prototype takes 2-3 weeks at least. And if you change the requirements late in the development, then get ready to dodge screwdrivers and heavier objects.
It feels like a typical software developer's solution to any problem is the same: "IDK, we code something, probably it will be OK; if it isn't, then we code some more." They are like five-year-old children who try to get to the other side of a forest by running around aimlessly until they get tired.
All of them should start their career (instead of at a web 3.0 IoT big-data buzzword outfit) at an automotive or military company, where software is tested, and verified, and it costs money. And this "whatever, we'll patch it for 10 years down the line" mentality is just sickening to me. It's like they never finish a project; they just stop working on it.
-
In your world, software for a small gadget costs £3 million and takes 2-3 years to surface. The start-up down the road will have it done for £30,000 and in 3 weeks. 3 million people will use it, the company makes a fortune, and only 30 people will complain that it's a little buggy from time to time and that they hate that it updates over the web frequently.
By the time yours is finished, nobody actually needs it anymore. You spend another £3 million trying to make it useful, but the investors get bored and dump your company in the gutter. Just like 99% of government contract work. Although the UK government has embraced agile in the last half dozen years or so.
I know which company I want to work for.
-
Agreed.
Deliver MCO before someone else does.
-
1) Money
2) Because waiting for the SW guys takes too long
3) Money
-
In your world, software for a small gadget costs £3 million and takes 2-3 years to surface. The start-up down the road will have it done for £30,000 and in 3 weeks. 3 million people will use it, the company makes a fortune, and only 30 people will complain that it's a little buggy from time to time and that they hate that it updates over the web frequently.
By the time yours is finished, nobody actually needs it anymore. You spend another £3 million trying to make it useful, but the investors get bored and dump your company in the gutter. Just like 99% of government contract work. Although the UK government has embraced agile in the last half dozen years or so.
I know which company I want to work for.
Because there is obviously nothing in between throwing shit out the door because AGILE and doing full MIL qualification on every script you write...
That is the software problem... people so lazy and greedy they deny the problem even exists...
Can I bring up an example? Any middle-to-high-end EDA software package that has been around for decades, from Altium to Sentaurus TCAD and Cadence Virtuoso (over 10k, probably even over 100k), is a dated, buggy piece of shit. Hell, Sentaurus has you scripting in Lisp (I mean, come on, it's FUCKING LISP). This is not fine... what is wrong with you all?
-
What’s wrong with lisp? ;)
-
Sheesh, all this us-vs-them talk with amazingly broad brush strokes. For every "clueless SW guy blowing up a transistor" story there is one about some "EE guy making bone-headed coding errors". The reality is far more nuanced than that. Any SW-based project that needs more than one programmer will fail without some level of discipline. The wild-man programmer who says "ship it now, fix it later" is more of a myth than not. And every good team has at least one adult making sure that doesn't happen.
Like it or not (and I suspect a lot of not out there), software is here to stay. It's at the heart of just about everything we build. An EE that doesn't have at least an understanding of programming is at a very distinct disadvantage in the marketplace.
-
Err, yes, that's me. There are 79 people on MY team. And we ship fast and often. And we don't blow up transistors. The guy who blew up the transistors probably works in McDonald's now. We only have people who don't blow up transistors. And some people who don't blow up engines, and some people who don't blow up calculators too. I think there are more hardware engineers on our software team than software engineers. And we just write software. That is the status quo at the moment. In the 20-odd years I've been writing software, bar some web shops, that's how we all roll. I'd argue the exception is the other way round.
It's a mish-mash. You have to be good at both. Why do you think MIT broadly glues both disciplines into one subject now?
-
What’s wrong with lisp? ;)
Another one of those fad languages; ISTR the AI crowd of the mid 80s were into it. Like Prolog. Yes, AI was quite the thing to be into in academia in the mid 80s. Just like flared trousers, these things keep coming back to haunt us!
-
Things that require knowledge workers to do non-tangible work which is poorly understood by other humans are a recipe for making a metric shit ton of cash. Also there are so many mediocre and poor programmers that it's very quick to rise to the mega-cash jobs if, to use an analogy, you know not to put your dick in a food blender.
But hardware/electronics engineers are just as valuable knowledge workers; I mean they possess the sort of knowledge that, as another forum member put it, takes more time to understand and develop. The fact that there are more SEs than HEs is a testament to that... but I think this is where the point about software requiring constant renewal comes in, which is where the higher pay comes from.
I contracted myself out as an ultra-pimped software engineer. Turned out that everyone else on the market was so dire it wasn't that difficult looking like a shiny golden nugget and taking relatively more wonga for the privilege. This has worked well for a long time.
So did you transition from an electronics engineer to software? Is that your everyday role?
Thanks for the replies everyone, it’s given me a lot of perspective and understanding. I’d like to quote more people but this would become too long of a post. I will continue to develop my software/programming skills since it seems like it’s a definite requirement. But my concern still remains: that electronics/hardware engineers are slowly “dying off” and I’ll eventually have to succumb to programming as a job.
-
What’s wrong with lisp? ;)
Another one of those fad languages; ISTR the AI crowd of the mid 80s were into it. Like Prolog. Yes, AI was quite the thing to be into in academia in the mid 80s. Just like flared trousers, these things keep coming back to haunt us!
LISP was one of the first languages, contemporary with COBOL and FORTRAN, and predating Algol-60.
There is a reason it keeps coming back. There's an old aphorism "Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
https://en.m.wikipedia.org/wiki/Greenspun%27s_tenth_rule
More cynical people will simply remember George Santayana's famous quip.
-
Indeed. LISP is hardly a fad language. It's a bit meta and requires a different line of thought. SBCL craps on a lot of environments.
Also, wonderful for data representation. Don't have to parse s-expressions!
I suggest people sit down with some red bull, acid tabs and a copy of Structure and Interpretation of Computer Programs.
My domain and company name is a lisp primitive ;)
-
The wild-man programmer that says "ship it now, fix it later" is more of a myth than not. And, every good team has at least one adult making sure that doesn't happen.
I wish I could agree, but that attitude seems to be a cancer that is spreading throughout the industry, at least in software. Everything is becoming a perpetual beta, with frequent updates touted as a feature rather than being honest that it is mostly stuff that should have been done/fixed before the product ever shipped.
-
Indeed. LISP is hardly a fad language. It's a bit meta and requires a different line of thought. SBCL craps on a lot of environments.
Also, wonderful for data representation. Don't have to parse s-expressions!
I've always thought of XML as being a triumphant reinvention of LISP, albeit without all the interesting powerful bits.
-
The rule of thumb to design a new device is:
- 1 analog electronic engineer
- 10 digital electronic engineers
- 100 software engineers
This. Most hardware doesn't do much interesting without software -- not just firmware, but often drivers, application software, network software, database software, mobile software, the list goes on. And the higher levels tend to be more complex even if it seems simpler due to higher level languages and library abstractions. Application software simply has more ways to go wrong. It interacts with more unpredictable things including the most unpredictable component: the user. So it takes more development effort to get something reliable that actually does what the user wants. A lot of that development is best done by someone who understands the hardware behind it all.
In fact, it's not just hardware engineers. Almost every technical job is going this way to some extent. If you want to do X, you are going to need to tell a computer how to X. If nobody has done exactly your X before, that is going to involve something similar to coding.
-
Well it doesn't always take a hardware engineer to do well with low level code and drivers.
The guy here that is super good with coding low level things actually went to school to be an IT technician but is now an all-round badass programmer here. And the guy who keeps all the IT running here is actually a programmer (but really good at high level coding too).
Oh, and yes, "agile hardware". The prototype is already on my table, all packed tight into the small case, and then in comes "Can we add WiFi to this? The competitor's product is supposedly gonna have WiFi so we need it on ours too" |O
-
Don't get me wrong, I never really used Lisp for programming, but I hear it has a pretty powerful metaprogramming system and can even be the best language around when that is needed, so I'm not in the fad camp...
BUT
In that EDA suite Lisp is used as a scripting language, and that is probably the worst choice ever... they could have chosen any other language actually designed as a scripting language.
-
The rule of thumb to design a new device is:
- 1 analog electronic engineer
- 10 digital electronic engineers
- 100 software engineers
This. Most hardware doesn't do much interesting without software -- not just firmware, but often drivers, application software, network software, database software, mobile software, the list goes on. And the higher levels tend to be more complex even if it seems simpler due to higher level languages and library abstractions. Application software simply has more ways to go wrong. It interacts with more unpredictable things including the most unpredictable component: the user. So it takes more development effort to get something reliable that actually does what the user wants. A lot of that development is best done by someone who understands the hardware behind it all.
In fact, it's not just hardware engineers. Almost every technical job is going this way to some extent. If you want to do X, you are going to need to tell a computer how to X. If nobody has done exactly your X before, that is going to involve something similar to coding.
Best done by someone that understands the hardware. On the surface one cannot disagree with this. However, there are examples in real life where keeping software and hardware under different roofs had a positive effect. When Bill Gates started out with GW-BASIC there was a decision to separate software from hardware. If the software said print "A" and the printer did not print "A", you phoned IBM and complained to their hardware guys. The end result was that it brought out excellence in both software and hardware. The hardware guys could not cheat by changing GW-BASIC; they had to rewrite the BIOS on the motherboard to make sure print "A" would print "A", period. This allowed software guys to do what they were supposed to do: write code with confidence that it would be executed as written. This led to top-down software such as Visual Basic, where dragging a browser icon over writes thousands of lines of code and tens of thousands of lines of hex for a net browser. In this case separating hardware from software had a positive effect on both hardware and software development.
-
I feel that hardware engineering is way more of a trade skill if you want to be good. You need to know things like thermal design, materials science, shop skills, physical understanding of the process you are controlling, radio behavior, wave theory, mechanical aspects...
Not to mention the heading of the industry, like knowing which direction to tackle a problem from. I.e. you can have 20 different circuits which handle the same problem, all with their own quirks. Then you need to imagine the device being in the field, interacting with various things...
I mean, you're physically putting something somewhere. Plus you need to look at costs and ultimately make the decision of utility vs parts cost.
I think that programming is less stressful to learn; you don't really need to tango with decisions made by a supercorporation making integrated circuits. I'm sure everyone here has wished "why the fuck could they have not just made this spec a little different"...
Systems engineering is always more in the mind of a hardware engineer than a software engineer.
-
I think there are a lot of people here who just don't understand software or how it's developed. I expect a portion believe that all software is just like what they write for an MCU.
A recent comment is probably the dumbest yet.
I think that programming is less stressful to learn; you don't really need to tango with decisions made by a supercorporation making integrated circuits. I'm sure everyone here has wished "why the fuck could they have not just made this spec a little different"...
No, us software engineers don't need to deal with specs for software modules and specs for hardware at all. Nope, we program in the ether, devoid of all interaction with other code or ICs.
Consider that you cannot write a single line of code without there being a spec behind it. Not one single line. Even if you get down to assembler there is still the spec of the processor. Most "lines" of code carry huge numbers of dependencies, involving libraries and other components which have protocols, specs and our equivalent of "datasheets", which are API documents. Then there is of course the OS and the user's config and the vast variance of the environment, which in the hardware world doesn't really change. Nobody suddenly changes the specs of your IC 2 years later and bricks all your products.
-
A recent comment is probably the dumbest yet.
It's ok if you just stick with arguments. No need to be arrogant.
Your opinion is just an opinion, like everybody else's.
-
It's ok if you just stick with arguments. No need to be arrogant.
Your opinion is just an opinion, like everybody else's.
Arrogant? You are suggesting that software development is easy. So I believe it deserves educated arrogance in response to an insulting and unfounded comment of ignorance.
Opinion? I have over 30 years experience with around 16 years professional and hold the position of lead developer. Billions of dollars have passed through my code and millions of people have used it. I think that puts weight in my opinion.
My last paragraph is simply fact, not opinion. If you can find evidence to deny it then I will accept your "opinion" with more weight.
-
As to the comments about juniors in code having an easier time getting up to speed: sure, they don't blow themselves up and they get things kinda working, but it's probably the same as it is in the EE space. One glance from an experienced developer and there are facepalm moments. It is likely it solves the happy path because that is all universities teach: short snippets and happy-path dev.
When you ask them things like "What if there aren't any orders in the list?", they stare dumbly at their code and then a glimmer of "Ohhhhh... oops." spreads across their face. So you send them away to fix the dozen or so time bombs in their code and it comes back as a jumbled pile of criss-crossed botch work attempting to address the corner cases. They are told to rewrite it from scratch. There are two ways to do things: right, and again.
It takes experience to see the problem, the solution and the solution's problems, then structure and design the code to flow neatly and concisely while staying maintainable and extendable. It is not easy.
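To make the corner-case point concrete, a minimal sketch (hypothetical names, not from any real codebase) of the happy-path version next to one that survives an empty or null list:
import java.util.List;
public class OrderTotals {
    // The happy-path version a junior might hand in: it "works" on demo data.
    public static double averageValueNaive(List<Double> orders) {
        double sum = 0;
        for (double v : orders) {   // NullPointerException if orders (or any element) is null
            sum += v;
        }
        return sum / orders.size(); // NaN when the list is empty
    }
    // The same method with the corner cases handled explicitly.
    public static double averageValue(List<Double> orders) {
        if (orders == null || orders.isEmpty()) {
            return 0.0; // or throw, depending on what the spec actually says
        }
        double sum = 0;
        for (Double v : orders) {
            if (v == null) {
                throw new IllegalArgumentException("null order value");
            }
            sum += v;
        }
        return sum / orders.size();
    }
}
The naive version passes the demo every time; the empty list only turns up in production.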
I think it's no harder or easier than EE, each have their easy parts and difficult parts. Experience always pays off. Teams need to be well balanced between juniors and seniors and we all need to fight the bean counters and trigger happy stress kitten project managers.
-
I think there are a lot of people here who just don't understand software or how it's developed. I expect a portion believe that all software is just like what they write for an MCU.
A recent comment is probably the dumbest yet.
I think that programming is less stressful to learn; you don't really need to tango with decisions made by a supercorporation making integrated circuits. I'm sure everyone here has wished "why the fuck could they have not just made this spec a little different"...
No, us software engineers don't need to deal with specs for software modules and specs for hardware at all. Nope, we program in the ether, devoid of all interaction with other code or ICs.
Consider that you cannot write a single line of code without there being a spec behind it. Not one single line. Even if you get down to assembler there is still the spec of the processor. Most "lines" of code carry huge numbers of dependencies, involving libraries and other components which have protocols, specs and our equivalent of "datasheets", which are API documents. Then there is of course the OS and the user's config and the vast variance of the environment, which in the hardware world doesn't really change. Nobody suddenly changes the specs of your IC 2 years later and bricks all your products.
While I agree with your comment about software, your point about IC specs not changing is wide of the mark. Let's consider something simple like single transistors.
The 2N3055 has changed so many times that you can't necessarily replace failed devices with modern ones; they have a significantly higher ft and can oscillate.
Then manufacturers simply stop manufacturing components (especially RF and low-noise components), even though they have valuable attributes and there is no replacement. That doesn't happen with software.
-
I think you missed my point. When you ship your product how often do the specs of the IC change in the field?
-
I think you missed my point. When you ship your product how often do the specs of the IC change in the field?
You can't copy hardware in the same way you copy software. The next batch isn't guaranteed to be the same. Nominally identical devices from different manufacturers can be subtly different. Ditto devices from the same manufacturer if they tweak their process.
Imagine if you wanted to sell your program in 5 years time, and you had to stockpile bits to make sure they hadn't changed in the intervening years :)
-
At least when you test something before it goes out the door you know it will work. For domestic software the number of configurations is immense, and the fluidity causes real issues trying to write software that remains reliable over time. Something always upgrades and changes the playing field in a way you didn't expect.
And... 5 years down the line when you decide to re-release a new version, the latest versions of your libraries have all changed and some of the features you were using have been deprecated or changed ... or the entire component you were using has ceased to exist for modern OS's. It would sometimes be easier to completely start again. Consider those millions of web applications based on Adobe Flash. Consider those websites that used Photo Bucket.
-
At least when you test something before it goes out the door you know it will work. For domestic software the number of configurations is immense, and the fluidity causes real issues trying to write software that remains reliable over time. Something always upgrades and changes the playing field in a way you didn't expect.
Now tell us how Intel is changing the internal operation of its processors that are out in the field, in response to Spectre and Meltdown.
There is vanishingly little difference between hardware and software.
-
At least when you test something before it goes out the door you know it will work.
From the context I assume you are trying to say that when you send hardware out the door you can know that it will work. Was that intended as a joke?
-
Now tell us how Intel is changing the internal operation of its processors that are out in the field, in response to Spectre and Meltdown.
There is vanishingly little difference between hardware and software.
By changing the microcode. Which is software. Can you touch it? No. It's software.
But... I concede your point. However what impact do those changes have on the hardware beyond it? Will your USB mouse stop working? Will your Gfx card stop working?
Now you could argue it is because the hardware interfaces are better defined and thus Intel can test they don't break said interfaces.
There are differences, they are complex. Software tends to exist in a much more fluid world. The dependency scope is more diverse. However the life cycle is much shorter.
Software that works as designed for decades requires a fixed environment, just like hardware. But fixed environments are not what we have today. We have a rapidly changing one. Software that worked 5 years ago on modern computers did not work on computers 5 years before that and it might not work on computers today.
-
At least when you test something before it goes out the door you know it will work.
From the context I assume you are trying to say that when you send hardware out the door you can know that it will work. Was that intended as a joke?
Let me be clearer. The design functions. You build it, you test it, you ship it. Unless it breaks, it will function as designed. If it breaks, it's broken.
Which reminds me about a previous attack on software, this concept that software should be "done right the first time" and not have any of these nonsense updates. This works if you fix the environment. The software will always do what it was intended to do. It doesn't age, its caps don't dry out; give it the same environment and it will run as designed and shipped forever. But that never happens, and expecting your 0 gauge train to run on your new 1/4 gauge tracks is rather short sighted.
In software we get this type of support ticket all the time:
"I took your 8 bit shift register and put it in a socket for a 16 bit one and it didn't work!"
Of course, it can have genuine bugs. But so can hardware.
-
I've had a few jobs out of college now, and while I keep starting off in the EE space, I always get steered into software somehow.
I've finally settled on a happy medium of just doing software at work, and doing all my EE at home in the form of hobbies. Schematic capture/board layout/design is much more enjoyable when you can just choose or change whatever component you want, without having to dig into requirements and sit through 10 pages of paperwork and 120 minutes of meetings to get someone to sign off on an ECO. It's also kept me from burning out on electronics; at home I can't stand writing software. I'd be worried that after 10 years in pure EE I'd get burnt out on doing it, and that's where my real passion is.
-
Now tell us how Intel is changing the internal operation of its processors that are out in the field, in response to Spectre and Meltdown.
There is vanishingly little difference between hardware and software.
By changing the microcode. Which is software. Can you touch it? No. It's software.
But... I concede your point. However what impact do those changes have on the hardware beyond it? Will your USB mouse stop working? Will your Gfx card stop working?
Now you could argue it is because the hardware interfaces are better defined and thus Intel can test they don't break said interfaces.
Not quite. Whether or not it is "microcode" has not been published; Intel even bought Altera so as to merge FPGA technology into their processors.
Either way, Intel is changing the implementation of their API, i.e. the instruction set, in ways that break the system. That's why their changes are being rolled back :(
There are differences, they are complex. Software tends to exist in a much more fluid world. The dependency scope is more diverse. However the life cycle is much shorter.
Software that works as designed for decades requires a fixed environment, just like hardware. But fixed environments are not what we have today. We have a rapidly changing one. Software that worked 5 years ago on modern computers did not work on computers 5 years before that and it might not work on computers today.
Those are merely differences of scale, not of kind.
-
I am sensing a lack of respect for professionally written software. An engineer will write software to take care of business and think that is enough. It is not enough for a business, because they want portable software. The meaning of portable software is that you can drop the source code anywhere and others will understand it without the need to have it explained in detail. A business does not want to be held to ransom by an engineer because the source code is unstructured and therefore cryptic in nature. For a professional writing software, just getting the job done is a low priority. It would be gigantically stupid to take on a software project if engineers had not already made sure it was doable. The purpose of a professional at software is to translate cryptic software into cognitive, structured, top-down code so that the software is portable and the baton can therefore be passed on to others. An example of professional software is that it will always start off simple: do while, input, process, output, end program. In four simple statements the entire program has been divided into three parts: input, process, output. The goal of a professional in software is portability to a common cognitive top-down structure that other programmers will recognize and drop to their knees saying thank you. It is not getting it done, rather writing code that is portable.
-
In every commercial software project I’ve ever been involved with there are dozens of requirements, portability -may- be one of them. Engineering is all about balancing those sometimes conflicting requirements.
Portability is not the same as readability. Furthermore, portability to what and for what purpose? Portability covers many things.
It's not uncommon to split a project into functional and non-functional requirements, and regrettably many project managers treat non-functional items as far less important than functional items, so things like performance, readability, and security often get bolted on very late in the project or even descoped.
I’ll also add that almost always, getting a product out of the door trumps everything else.
-
It's ok if you just stick with arguments. No need to be arrogant.
Your opinion is just an opinion, like everybody else's.
Arrogant? You are suggesting that software development is easy.
This is an internet forum. People have opinions. They can and will be different than yours. No need to call it dumb.
It doesn't help the discussion.
Opinion? I have over 30 years experience with around 16 years professional and hold the position of lead developer. Billions of dollars have passed through my code and millions of people have used it. I think that puts weight in my opinion.
Unfortunately, not everybody tells the truth. I could write that I have much more experience than you and that my opinion is better than yours. No need to call it dumb. Just stick with arguments.
My last paragraph is simply fact, not opinion. If you can find evidence to deny it then I will accept your "opinion" with more weight.
You make a statement, you prove it.
You can't make a statement and say it's a fact until somebody else proves it isn't.
-
This is an internet forum. People have opinions. They can and will be different than yours. No need to call it dumb.
It doesn't help the discussion.
Unfortunately, not everybody tells the truth. I could write that I have much more experience than you and that my opinion is better than yours. No need to call it dumb. Just stick with arguments.
You make a statement, you prove it.
You can't make a statement and say it's a fact until somebody else proves it isn't.
I'm sorry, but I'm tired of this millennial attitude that every opinion is equally valuable. Some opinions are dumb when faced with the facts.
I could post my LinkedIn or my CV but I'm not prepared to do so. I will let it pass that you effectively called me a liar.
Do I need to prove the sky is blue?
https://docs.oracle.com/javase/7/docs/api/
https://docs.python.org/2/library/
http://www.cplusplus.com/reference/
I could go on all day, because I can't find a single instruction I could write in any language that will not have a specification behind it. The annoying ones are the ones that DON'T have a well-written spec. Some give you one line at best and you have to trust it's implemented intuitively and does what you expect.
-
I'm sorry, but I'm tired of this millennial attitude that every opinion is equally valuable. Some opinions are dumb when faced with the facts.
Agreed. Shame it has to be said, but that is in itself a sign of the times.
I could go on all day, because I can't find a single instruction I could write in any language that will not have a specification behind it. The annoying ones are the ones that DON'T have a well-written spec. Some give you one line at best and you have to trust it's implemented intuitively and does what you expect.
Unfortunately I can, and there is an entire class of such languages that are seductively fashionable: DSLs. Lemma: a compsci graduate that can't work on compilers will yearn to create their own DSL.
Most DSLs:
- start out being small languages for limited purposes, often to allow "easy scripting" of operations written in a conventional well-understood language
- end up as cancerous growths as new features are added
- the misfeatures interact in unexpected ways
- the original designer perpetrator doesn't fully understand their creation, leading to "operational semantics"[1]
- there's no tool support
- if you can hire anybody to develop/maintain such stuff, you have to spend time/money training them
There are some well-considered DSLs (e.g. for FSMs), but they still suffer from the tooling and maintenance problems.
Usually a decent DSLibrary is preferable to a DSLanguage.
[1] a.k.a. suck it and see
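As an illustration of the DSLibrary point: a fluent builder in the host language gives you the readable "little language" feel while the compiler and IDE keep working for you. A toy sketch in Java (hypothetical names, just to show the shape of the idea):
import java.util.HashMap;
import java.util.Map;
// A toy fluent-builder "DSLibrary" for FSM transitions. Because it is plain
// Java, the compiler and IDE check everything; no custom parser or tooling.
public class Fsm {
    private final Map<String, String> transitions = new HashMap<>();
    private String state;
    public static Fsm startingAt(String initial) {
        Fsm fsm = new Fsm();
        fsm.state = initial;
        return fsm;
    }
    // Returning 'this' is what lets the calls chain like a little language.
    public Fsm on(String state, String event, String next) {
        transitions.put(state + "/" + event, next);
        return this;
    }
    public String fire(String event) {
        String next = transitions.get(state + "/" + event);
        if (next != null) {
            state = next;
        }
        return state;
    }
    public static void main(String[] args) {
        Fsm door = Fsm.startingAt("CLOSED")
                      .on("CLOSED", "open", "OPEN")
                      .on("OPEN", "close", "CLOSED");
        System.out.println(door.fire("open")); // prints OPEN
    }
}
The chained calls read almost like a dedicated FSM language, but there is no parser, no custom tooling, and anyone who knows Java can maintain it.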
-
In every commercial software project I’ve ever been involved with there are dozens of requirements, portability -may- be one of them. Engineering is all about balancing those sometimes conflicting requirements.
Portability is not the same as readability. Furthermore, portability to what and for what purpose? Portability covers many things.
It's not uncommon to split a project into functional and non-functional requirements, and regrettably many project managers treat non-functional items as far less important than functional items, so things like performance, readability, and security often get bolted on very late in the project or even descoped.
I’ll also add that almost always, getting a product out of the door trumps everything else.
the problem is in the last sentence...
It seems to me that getting a product out of the door doesn't only trump portability and other non-functional requirements...
it often trumps getting the product working as it should.
There could very well be specifications describing every code path in the program, I don't know that, but I argue that bugs are most definitely not coded to a spec.
So in the end, what am I as an end user left with?
A very expensive piece of software (MATLAB) that does not work because of a bug in a primary feature (launching a Simulink simulation through a script) that was working flawlessly in the previous release, which had me cursing and swearing for almost 2 days as I tried to find a way to make said feature work again, and at last having to resort to installing the previous release to be able to work. This is just one of countless times this has happened.
Now I don't know if all of this is due to a management decision to ship the product anyhow, or if it just slipped through the cracks due to a lack of any meaningful QA worth its name, and I frankly don't care...
If I pay for a product it has to do what it says on the tin (ok, marketing wank aside). I can gloss over the occasional bug, but that is about it; if a touted feature does not work after 2 days of trying as per what the manual says, and the software spews out some cryptic, ungooglable internal error, I get pissed off at whoever wrote, tested and sold the dang thing.
And also, as with everything, the more I pay for software the more I expect it to work flawlessly:
if a $400 Rigol scope is slow and some of the features are buggy, I can gloss over it;
if the same happens with a $20,000 LeCroy there is not a chance in hell I let it slip.
-
On specs...
I had a grr moment at work on Thursday. I wanted to do a quick unit test for a timer. It's schoolboy stuff.
auditCounter.startTheClock();
// do something
auditCounter.stopTheClock();
Now, I knew peril lay ahead, but I thought I would give a unit test a go.
auditCounter.startTheClock();
Thread.sleep(1234);
auditCounter.stopTheClock();
assertEquals( 1234, auditCounter.getTimeMillis() );
Now I was expecting a failure with 1235 millis, maybe, but no. I got 1233 about 10% of the time.
So I checked the spec of Thread.sleep() and to paraphrase:
"We have no idea how long it will sleep for, YMMV. It's all them hardware guys fault."
(Actual spec):
Causes the currently executing thread to sleep (temporarily cease execution) for the specified number of milliseconds, subject to the precision and accuracy of system timers and schedulers. The thread does not lose ownership of any monitors.
Source: https://docs.oracle.com/javase/8/docs/api/java/lang/Thread.html#sleep-long-
A few disclaimers.... testing timing is always tricky. Thread.sleep() has never guaranteed how long it will sleep for. I was running it on a virtual instance which means virtualisation of the CPU timers which is always plagued with inaccuracies. Timer accuracy on 32bit intel was only 100Hz. I believe on 64bit it's still only 1000Hz. If you want finer grained timers you need to use hardware RTCs or calibrated CPU delay loops.
I grumbled, muttered, oFFS, decided to revert to "testing timers is a fool's errand" and deleted the test. I did consider leaving it for the next guy along, knowing it would fail 10-20% of the time, but I'm too nice for that.
I replaced it with a test without a sleep and checked it took less than 1 millisecond. Even that could fail in a tiny number of random cases, though it's highly unlikely.
EDIT: Of course... this timer is non-critical. All it does is output, for audit purposes, how long the job took in seconds.
-
Sorry to dilute the topic, but a word on that unit test, since software quality has been attacked here in this thread.
The purpose of a unit test is not just to demonstrate your code works. The purpose of a unit test is to know your code still works when someone else has been in rummaging around and changing things.
In the example I gave above, my "took less than a millisecond" is a dangling live wire. If someone were to come in later wanting microsecond timing or single-second timing and make changes to the start and stop methods, my unit test may not fail when they change the resolution that start and stop use and haven't updated getTimeMillis() - they may not use it or care for it.
Thus a better "botch" around timer inaccuracies would be to sleep for say, 3 milliseconds and check that getTimeMillis() returns somewhere between 1 and 4 milliseconds.
Another approach is to remove the sensing from the timer class entirely and pass the current system millisecond counter into the start and stop methods, so the unit test can inject test values instead of relying on the system timers. Of course that moves the problem to the method that uses the start and stop methods.
Testing is not always as easy as it seems. Dates, times, timers and other "not entirely deterministic" things are always difficult. If I had £1 for every time I have had to fix a unit test from an American because they hard coded date/times in EST, I'd be rich.
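A sketch of that injection idea, reusing the auditCounter names from the earlier post (the constructor-injected Clock is my own embellishment, not the original code):
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;
// Production code constructs this with Clock.systemUTC(); a test injects a
// hand-cranked clock, so the elapsed time is exactly what the test dictates.
public class AuditCounter {
    private final Clock clock;
    private long startMillis;
    private long stopMillis;
    public AuditCounter(Clock clock) { this.clock = clock; }
    public void startTheClock() { startMillis = clock.millis(); }
    public void stopTheClock()  { stopMillis = clock.millis(); }
    public long getTimeMillis() { return stopMillis - startMillis; }
}
// A trivial test clock the test advances by hand. No Thread.sleep() anywhere.
class TestClock extends Clock {
    long now = 0;
    @Override public long millis() { return now; }
    @Override public Instant instant() { return Instant.ofEpochMilli(now); }
    @Override public ZoneId getZone() { return ZoneOffset.UTC; }
    @Override public Clock withZone(ZoneId zone) { return this; }
}
The test then becomes deterministic:
TestClock clock = new TestClock();
AuditCounter auditCounter = new AuditCounter(clock);
auditCounter.startTheClock();
clock.now += 1234; // the "sleep", advanced by hand
auditCounter.stopTheClock();
assertEquals(1234, auditCounter.getTimeMillis()); // passes every run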
-
A few disclaimers.... testing timing is always tricky.
It is worse than that.
If you are testing timers then it is a red flag since it usually indicates bodged system design, analogous to having monostables or chains of inverters in a digital circuit, or select-on-test components in an analogue circuit.
Testing timing is more difficult than most people imagine: you really need multiple executions under representative conditions, and to plot the PDF/CDF of the timing distribution.
-
I’ll also add that almost always, getting a product out of the door trumps everything else.
the problem is in the last sentence...
It seems to me that getting a product out of the door doesn't only trump portability and other non-functional requirements...
it often trumps getting the product working as it should
I didn't say I agreed with it! ;-)
-
A few disclaimers.... testing timing is always tricky.
It is worse than that.
If you are testing timers then it is a red flag since it usually indicates bodged system design, analogous to having monostables or chains of inverters in a digital circuit, or select-on-test components in an analogue circuit.
Testing timing is more difficult than most people imagine: you really need multiple executions under representative conditions, and to plot the PDF/CDF of the timing distribution.
In fairness, I'm not actually testing the timing. I'm just testing that the code counts and returns milliseconds between start and stop. Not that the timing is accurate.
-
In every commercial software project I’ve ever been involved with there are dozens of requirements, portability -may- be one of them. Engineering is all about balancing those sometimes conflicting requirements.
Portability is not the same as readability. Furthermore, portability to what and for what purpose? Portability covers many things.
It's not uncommon to split a project into functional and non-functional requirements, and regrettably many project managers treat non-functional items as far less important than functional items, so things like performance, readability, and security often get bolted on very late in the project or even descoped.
I’ll also add that almost always, getting a product out of the door trumps everything else.
I should say the word portability is one that I use in a descriptive way. My meaning was top-down program structure using variables with long names, to make it as easy as possible for the next guy to understand. This opens the door for teamwork. For example, I could email my "structured" source code and ask you for advice on this or that. As long as the code is structured you can see what I am doing with a quick glance.
There is also the issue of the length of a program. With thousands of lines one can get lost in their own code. This is when the structured KISS philosophy pays off. KISS is short for Keep It Simple, Stupid.
Out the door trumps everything. I would add that out the door and making a profit trumps everything else, with the focus on making a profit. Not easy, considering the apps on my phone and computer are all free downloads.
The internet connects so many bright minds from around the world, all armed with a common Android operating system. This leads to excellence in software. My phone is also a GPS. There is a ton of code in that puppy and it is free. Good for me, but where is the profit for the bright sparks that wrote the code?
-
Top down design is less useful these days. Bottom up is sometimes more useful.
Certainly in the OOP paradigm software isn't really seen as a hierarchy of operations that breaks down from very high level blocks into finer and finer details. That technique dates back to JSP (Jackson Structured Programming), but it becomes brittle and confusing when considering asynchronous operations and concurrency, and given that a lot of code written today has some form of network in the mix, or multi-threading, top-down design is awkward and doesn't really deliver the neatness and understandability suggested.
In OOP, software is seen more as a peer network of collaborating "objects". Objects should be encapsulated and private about their details. They offer a service via a contract (interface), and as long as they fulfil that contract everything is grand.
Objects are often then grafted into design pattern building blocks, design patterns being the results of academic analysis of common programming problems. An example is the Factory Pattern, which allows you to decouple the implementation of an interface from its actual code, so the latter can be swapped at run time, or at least without code change. This concept is then adapted and extended to give us "Dependency Injection" to meet the "Inversion of Control" or "Inverse Dependency" structures, where classes/objects do not depend on fixed individual collaborating instance peers; instead the outside code provides the implementation to the object. Hence "dependency injection". It is highly useful for creating test harnesses or stubbing out components with fixed known quantities to isolate test scope. A popular example would be replacing a DAO (Data Access Object) with one that returns hard coded values, rather than testing the database as well (that would be done elsewhere).
So, maybe you can come to see that the question: "Where is the top?" is highly ambiguous and not very easy to answer. There are many tops. Just following from the "int main()" method down will not lead you to all entry points either.
Certainly, "Top down design" is a useful tool in the box and would be used when breaking down a complex operation or behaviour, but it's just one tool. Bottom up design, test driven development, object analysis, dictionary analysis, flow charts, sequence diagrams... it's a rather big tool box.
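For anyone who hasn't met it, a bare-bones sketch of the Factory Pattern described above (hypothetical Dao names; the "stub" entry is exactly the DAO-stubbing trick mentioned):
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;
// Callers ask the factory for a Dao by key and never name a concrete class,
// so the implementation can be swapped (e.g. for a canned test stub)
// without touching any calling code.
interface Dao {
    String fetch(int id);
}
class JdbcDao implements Dao {
    public String fetch(int id) { return "row " + id + " from the database"; }
}
class StubDao implements Dao {
    public String fetch(int id) { return "canned test row"; }
}
class DaoFactory {
    private static final Map<String, Supplier<Dao>> REGISTRY = new HashMap<>();
    static {
        REGISTRY.put("jdbc", JdbcDao::new);
        REGISTRY.put("stub", StubDao::new);
    }
    // Which implementation comes back is decided by configuration, not code.
    static Dao create(String key) {
        return REGISTRY.get(key).get();
    }
}
Production config asks for "jdbc"; a test harness asks for "stub" and gets fixed known quantities, with the calling code none the wiser.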
-
I'm sorry, but I'm tired of this millennial attitude that every opinion is equally valuable. Some opinions are dumb when faced with the facts.
Agreed. Shame it has to be said, but that is in itself a sign of the times.
It should be noted, of course, that calling an opinion out as "dumb" when levelled against the facts is not the same as calling the person who voiced that opinion dumb. That may or may not be the case :). The originator of the opinion may simply be giving his honest opinion, from his own point of view, while not armed with all the facts. However it still stands that, from a more informed standpoint, the opinion itself can be dumb.
-
At least when you test something before it goes out the door you know it will work.
From the context I assume you are trying to say that when you send hardware out the door you can know that it will work. Was that intended as a joke?
Let me be clearer. The design functions. You build it, you test it, you ship it. Unless it breaks, it will function as designed. If it breaks, it's broken.
Which reminds me about a previous attack on software, this concept that software should be "done right the first time" and not have any of these nonsense updates. This works if you fix the environment. The software will always do what it was intended to do. It doesn't age, its caps don't dry out; give it the same environment and it will run as designed and shipped forever. But that never happens, and expecting your 0 gauge train to run on your new 1/4 gauge tracks is rather short sighted.
In software we get this type of support ticket all the time:
"I took your 8 bit shift register and put it in a socket for a 16 bit one and it didn't work!"
Of course, it can have genuine bugs. But so can hardware.
I think most EEs just don't grasp the actual complexity of most software (to be fair, many SEs don't either). I'm not talking about writing a little firmware to blink lights, drive steppers, etc. I'm talking about real software that takes a large team to write, is 50,000+ lines (often into the 100,000s or even millions), interacts with many different pieces of hardware and/or other software and services, etc.
It's not reasonable to expect that 100,000 lines of software will ever be perfect. The best you can do is what we do with safety critical software, and even this takes an order of magnitude more time and resources than normal software (and often more). You can test the software to the software requirement and show that it does, indeed, satisfy the requirements, and that it does nothing that operates outside of the requirements. It's enormously time consuming and expensive, but we do it all the time. That's how cars and airplanes, running millions of lines of software, operate day in and day out without killing people.
It still doesn't address errors in the requirements. Garbage in, garbage out.
But comparing electrical engineers and software engineers is a pointless exercise. I'm trained as an SE, but I've played on both sides at one time or another. EEs have their own problems to contend with that make the job difficult and require creativity and knowledge to solve, but in terms of sheer complexity there is really nothing at the board level that EEs do that comes close to even simple software projects. I'm not talking about how hard the job is, or how much intelligence or knowledge you need, or anything like that. I'm just talking complexity in terms of connections and logic. Most boards have very little logic on them, and for good reason.
The simple truth is that it's FAR more efficient to build the electronics as simply as possible, and bury as much complexity as possible in the software/firmware. So to answer the OP: if you're going to be a modern EE, you'd better know how to write some software, because most of your boards that do something interesting will have programmable components on them, and the best EEs will be able to test their boards and maybe even write some simple driver software, just like the best SEs (at least the low level guys) are able to do basic hardware/electrical debugging and rework. And if it's a really simple project, maybe you just do the whole thing yourself. A lot of times, they either get bored, the EE work dries up or they just decide they like software better, and boom... they're doing software. I've seen it happen many, many times.
And for those who think there's a "push it out the door" mentality, or some other nonsense going on in the software industry, you can whine all you want but by the time you wake up in the morning, microwave your oatmeal, watch the news, check EEVBlog and drive to work, you've already interacted with software that extends for hundreds of millions of lines. HUNDREDS of millions of lines. How many bugs did you find? Overall, I think that's pretty damn impressive.
-
Top down design is less useful these days. Bottom up is sometimes more useful.
Certainly in the OOP paradigm software isn't really seen as a hierarchy of operations that breaks down from very high level blocks into finer and finer details. That technique dates back to JSP (Jackson Structured Programming), but it becomes brittle and confusing when considering asynchronous operations and concurrency, and given that a lot of code written today has some form of network in the mix, or multi-threading, top-down design is awkward and doesn't really deliver the neatness and understandability suggested.
In OOP, software is seen more as a peer network of collaborating "objects". Objects should be encapsulated and private about their details. They offer a service via a contract (interface), and as long as they fulfil that contract everything is grand.
Objects are often then grafted into design pattern building blocks, design patterns being the results of academic analysis of common programming problems. An example is the Factory Pattern, which allows you to decouple the implementation of an interface from its actual code, so the latter can be swapped at run time, or at least without code change. This concept is then adapted and extended to give us "Dependency Injection" to meet the "Inversion of Control" or "Inverse Dependency" structures, where classes/objects do not depend on fixed individual collaborating instance peers; instead the outside code provides the implementation to the object. Hence "dependency injection". It is highly useful for creating test harnesses or stubbing out components with fixed known quantities to isolate test scope. A popular example would be replacing a DAO (Data Access Object) with one that returns hard coded values, rather than testing the database as well (that would be done elsewhere).
So, maybe you can come to see that the question: "Where is the top?" is highly ambiguous and not very easy to answer. There are many tops. Just following from the "int main()" method down will not lead you to all entry points either.
Certainly, "Top down design" is a useful tool in the box and would be used when breaking down a complex operation or behaviour, but it's just one tool. Bottom up design, test driven development, object analysis, dictionary analysis, flow charts, sequence diagrams... it's a rather big tool box.
Well said, and I see your point. If the software is more along the lines of a ROM state machine then what is input and what is output is blurred: a bunch of knee-jerk, bottom-up instructions that are state-of-system dependent. My hat goes off to you. I would not enjoy writing that type of code, where the friendly structure of input, process, output is lost. I want the safe shores of the Jackson world where my noodle can think with some clarity.
-
Bottom up is sometimes used for OOP design by starting with individual objects, their behaviours and how they interact, and building up the network of peers gradually; that then gives you ideas for your overall code architecture. I don't think a lot of OOP developers think about it that way. Maybe some of the more academically learned ones. It's like designing a schematic by dropping the "main chip" in the centre, spreading the schematic out around it, and making the overall board layout, PSU etc. as it presents itself to you.
The other "bottom up" concept is in analysis. A taught, but seldom used, waterfall analysis method in OOP is to get a written requirement summary text from a good business analyst and the customer. It's often in shared language, which is just customer domain language but with strict term definitions. You then look for nouns and verbs and pull them out as objects and methods/behaviours, being careful to avoid "System" entities. These form sentences in natural language: the requirement "A document can be printed." translates almost directly to a Document.print() method. These then get rattled around UML design software, almost like an EDA program. Associations, collaborations and aggregations are linked out between them, so-called class diagrams. You can then generate object diagrams to show actual instances working together, and then sequence diagrams to show the individual collaborations. So you are starting with the core bottom objects and building up to seeing the overall picture. On a big design you could have hundreds of views of the same underlying model. Kind of like a 100+ page schematic. These tools will then even go as far as generating your empty code files... or reverse engineering existing class files back to UML models.
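The noun/verb extraction step, boiled down to a couple of lines (hypothetical classes, just to show the mapping):
// "A document can be printed." -> the noun becomes the class, the verb the method.
class Document {
    void print() { /* ... */ }
}
// "A customer places an order." -> two nouns (an association) and a verb.
class Order { }
class Customer {
    Order placeOrder() { return new Order(); }
}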
Thankfully I have not had to lead, or be that senior on, such a large scale design; few do. Most of the stuff I have worked on has been divisible into sets of much smaller components that interact via the database. State machine like, but just moving data through a pipeline sequence: picking up data in one state, processing it and putting it back one step further on. Small, simple components make the work easier to divide up and give out to junior engineers and testers.
Today I find a whole load of software I have written or seen involves picking data up from one place, doing very little to it and putting it back somewhere else in a different format. It's quite tedious. The analogy is reading a UART line and sending barely altered, slightly processed, SPI.
-
I don't know if it's just me, but I've observed that a lot of electronics engineers (either recent graduates or not) seem to end up in jobs where the majority of their work involves coding. I'm not talking about just firmware, but even getting into application development, Python, web development...anything not involving electronics. Some of my uni friends since graduating, whilst trying to get into electronics design, a lot of them ended up working as software engineers. Some of them even started in an electronics design role, but transitioned over into programming. Is there a reason for this? Is electronics as a job becoming almost 'obsolete' since there's so much we can do with code when it comes to an ARM board or an FPGA, that designing electronics isn't as highly valued anymore, where you can just use a stock standard circuit to get what you want done? Is electronics the sort of field that runs the risk of being out sourced to another country?
I work as an electronics/hardware engineer for the record, and I enjoy it, but I also find myself wanting to get involved with coding a lot more lately.
Because we got bored listening to our software colleagues going on about how complicated their jobs were :-DD
-
To be fair most of the really complicated things I've seen in the software sector were founded on the shoulders of massive fuck ups, usually by inexperienced engineers.
-
I've seen a lot of hardware designs that followed the same methodology. You'd be a fool to think that there are no other fools in any given discipline.
Any fool can mash things together at random; it takes wisdom to optimize a design, to find the insight that allows reduction.
Tim
-
Today I find a whole load of software I have written or seen involves picking data up from one place, doing very little to it and putting it back somewhere else in a different format.
More or less, this is THE software problem that always has to be solved. Whether you do a lot with the data or a little, organizing and managing the data is what makes or breaks a design. Every major problem I've ever run into with a software design is that I have data HERE, it needs to get THERE, and there's no good way of getting it there. Most everything else is child's play. The nine pound hammer for fixing this is to simply make everything global, and for tiny projects that's actually a great solution, but for anything more than flipping a few bits here and there that gets out of hand almost immediately.
Whenever I start a new project, I spend a little bit of time playing with any new technologies or risk areas, and then I dive into defining my data types and getting the data paths correct. It's boring and it's difficult... it requires looking at the entire system as a whole. But once that's in place, everything else always seems to design itself.
-
Similar thing, but I was talking about actual "databases". Consider an optical network management tool. Pick data up from an incoming SNMP scanner, form it into entities and put it in a database. Another part picks it up from the database, forms it into JSON and sends it to the front end via a REST API. The front end consumes the JSON and creates an Angular scope, and the Angular scope fills in the blanks in the HTML.
I think the problem you are describing is moving data around in memory. If a function in the Nth layer needs the customer's locale to calculate tax on an order item, how do you get the data down through all N layers of function calls?
This problem is different in the OOP world. The TaxCalculator object will be provided a reference to another object that knows where to get the customer's locale. That object will be injected by configuration. As a back-of-envelope example:
import java.util.List;
import java.util.Locale;
import javax.inject.Inject;

class TaxCalculator {
    @Inject // the container supplies the CustomerManager; this class never creates one
    private CustomerManager custManager;

    public void calculateTax(List<OrderItem> orderItems) {
        Customer cust = custManager.getCurrentCustomer();
        if (cust == null) {
            // panic and run round the room with your hands in the air.
            throw new IllegalStateException("no current customer");
        }
        Locale custLocale = cust.getLocale();
        for (OrderItem orderItem : orderItems) {
            orderItem.calculateTax(custLocale);
        }
    }
}

interface CustomerManager {
    public Customer getCurrentCustomer();
}

interface Customer {
    public Locale getLocale();
}

interface OrderItem {
    public void calculateTax(Locale locale);
}
Dependency injection is fairly cool, but it can be overused and can end up with very messy config. However it allows you to profile the software and change inner components from config. It also allows you to wire things directly into the Nth layer without having to pass the data all the way there, or make it global.
Remember that in the OOP world the data and the code/behaviours are encapsulated together. You don't consider data/variables and functions as separate things. There is no such thing as data tables etc.
-
Similar thing, but I was talking about actual "databases". Consider an optical network management tool. Pick data up from an incoming SNMP scanner, form it into entities and put it in a database. Another part picks it up from the database, forms it into JSON and sends it to the front end via a REST API. The front end consumes the JSON and creates an Angular scope, and the Angular scope fills in the blanks in the HTML.
I think the problem you are describing is moving data around in memory. If a function in the Nth layer needs the customer's locale to calculate tax on an order item, how do you get the data down through all N layers of function calls?
This problem is different in the OOP world. The TaxCalculator object will be provided a reference to another object that knows where to get the customer's locale. That object will be injected by configuration. As a back-of-envelope example:
import java.util.List;
import java.util.Locale;
import javax.inject.Inject;

class TaxCalculator {
    @Inject // the container supplies the CustomerManager; this class never creates one
    private CustomerManager custManager;

    public void calculateTax(List<OrderItem> orderItems) {
        Customer cust = custManager.getCurrentCustomer();
        if (cust == null) {
            // panic and run round the room with your hands in the air.
            throw new IllegalStateException("no current customer");
        }
        Locale custLocale = cust.getLocale();
        for (OrderItem orderItem : orderItems) {
            orderItem.calculateTax(custLocale);
        }
    }
}

interface CustomerManager {
    public Customer getCurrentCustomer();
}

interface Customer {
    public Locale getLocale();
}

interface OrderItem {
    public void calculateTax(Locale locale);
}
Dependency injection is fairly cool, but it can be overused and can end up with very messy config. However it allows you to profile the software and change inner components from config. It also allows you to wire things directly into the Nth layer without having to pass the data all the way there, or make it global.
Remember that in the OOP world the data and the code/behaviours are encapsulated together. You don't consider data/variables and functions as separate things. There is no such thing as data tables etc.
With C++/C#, the way I would generally solve more complex problems is at configuration time, with class factories and named objects. At initialization time, everyone just finds whatever objects they need. If Object A needs Object B, it just has a reference to a particular Object B's name. But it's one more thing to build and debug.
Fortunately C++ and C# have all sorts of tools to make this easy. In the dark ages (20 years ago) there was an awful lot of stuff you had to build in C++ to make this work.
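A rough sketch of that named-object idea (in Java to match the other examples in this thread; hypothetical names, thread safety and lifecycle ignored):
import java.util.HashMap;
import java.util.Map;
// Objects are created and registered by name at configuration time; at
// initialization time a component that needs "objectB" just looks the name
// up, rather than holding a hard compile-time reference.
class ObjectRegistry {
    private static final Map<String, Object> OBJECTS = new HashMap<>();
    static void register(String name, Object obj) {
        OBJECTS.put(name, obj);
    }
    @SuppressWarnings("unchecked")
    static <T> T lookup(String name) {
        Object obj = OBJECTS.get(name);
        if (obj == null) {
            throw new IllegalStateException("no object named " + name);
        }
        return (T) obj;
    }
}
Configuration does ObjectRegistry.register("objectB", new SomethingUseful()); and Object A later does ObjectRegistry.lookup("objectB"), knowing only the name.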
-
Dependency injection is fairly cool, but it can be overused and can end up with very messy config. However it allows you to profile the software and change inner components from config. It also allows you to wire things directly into the Nth layer without having to pass the data all the way there, or make it global.
And that config is normally in a different language (e.g. XML) to the language used for the components (e.g. Java). That buggers the IDE's ability to tell you "what objects are invoked by this object" and "what objects can invoke this object" (where object == instance of this class!).
If you think about it, DI is little more than a replication of the hardware way of thinking: a cabinet "wires together" several racks, a rack "wires together" several PCBs, a PCB "wires together" several ICs, an IC "wires together" several transistors.
If you then allow the (top level) cabinet creator to have direct access to the (inner components') config registers in an IC, you have the starting point for some config hell and interesting product liability discussions! "...but your system installer configured the IC's outputs to be HVCMOS not LVDS..."
TANSTAAFL. Pick your failure modes.
-
With C++/C#, the way I would generally solve more complex problems is at configuration time, with class factories and named objects. At initialization time, everyone just finds whatever objects they need. If Object A needs Object B, it just has a reference to a particular Object B's name. But it's one more thing to build and debug.
More or less the same thing. I have used named object factories in C++ before.
In some enterprise setups the objects might not be local. With things like CORBA, EJB or a microservice architecture, the objects or services might be remote. You use a registry service to find them. You can even plug that into DNS and DHCP and let your objects go roaming around the world.
You can see this can be a nightmare to debug though :)
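As a back-of-envelope sketch of that registry idea in Java: a JNDI lookup. The JNDI name below is made up, and this only does something useful when run inside a container or against a naming server that actually provides the registry:

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class RegistryLookupDemo {
    public static void main(String[] args) throws NamingException {
        InitialContext ctx = new InitialContext();
        // resolve a (possibly remote) service by name; the name is hypothetical
        Object service = ctx.lookup("java:global/orders/CustomerManager");
        System.out.println("Found service stub: " + service);
    }
}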
-
Generally, software development these days is entering an interesting era where so many things have been done before and open sourced that often the first thing you do is decide which framework or engine you will use as the basic architecture. Say for Java that might be "Spring" and the modules and sub-frameworks relevant to your project.
You then hang your stuff off that framework, hoping you don't run into any, "What do you mean I can't do X?" problems.
The downside with this approach is you get "Sledge hammer to crack a nut" everywhere. Enterprise Java programmers become obsessed with reusing frameworks and design patterns out of their tool box or JAR library for everything. So they end up solving simple problems with design patterns created for much more complicated problems. This makes the software a lot more complex than it needed to be and a lot harder to support.
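For illustration, the "hang your stuff off the framework" starting point can be tiny. A minimal sketch of a Spring Boot entry point, assuming the spring-boot-starter dependency is on the classpath:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication // turns on component scanning and auto-configuration
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}

Everything else (web server, DI container, database wiring) hangs off what the framework discovers from there.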
Completely off topic, but my day just took a turn for the worse:
internal compiler error: Segmentation fault
That's not going to be good.
-
Just a point earlier about dependency injection. You usually don’t configure it these days. It configures itself! You just write objects and use them. You only ask the container for an object at the entry point.
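A minimal sketch of that entry-point-only lookup, using Guice as one example container. TaxCalculator is the hypothetical class from earlier in the thread; a real setup would also pass a module binding its CustomerManager dependency:

import com.google.inject.Guice;
import com.google.inject.Injector;

public class Main {
    public static void main(String[] args) {
        // the only place the code talks to the container directly
        Injector injector = Guice.createInjector(); // real code would pass modules
        TaxCalculator calc = injector.getInstance(TaxCalculator.class);
        // everything calc needs has been injected for us from here on
    }
}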
Frameworks I’m a fan of. Because someone else has done the hard work. I want to deliver business value and that doesn’t come from writing glue.
Enjoy your compiler error. I’m trying to persuade FreeRTOS to target something on ARM. I am considering giving up and moving the hardware to PIC24 and writing it in assembly.
-
Just a point earlier about dependency injection. You usually don’t configure it these days. It configures itself! You just write objects and use them. You only ask the container for an object at the entry point.
Frameworks I’m a fan of. Because someone else has done the hard work. I want to deliver business value and that doesn’t come from writing glue.
Enjoy your compiler error. I’m trying to persuade FreeRTOS to target something on ARM. I am considering giving up and moving the hardware to PIC24 and writing it in assembly.
Yep, and you only need to tell the DI framework which implementation you want if there is more than one. If it's fixed you can usually do something like (JSR-330 style; the exact syntax varies by framework):
@Inject @Named("ThatOne")
Of course for unit tests or integration tests you can override the qualifiers, I believe.
My bugbear is with people overusing complicated problem-solving design patterns to solve simple problems.
I once worked as a low-latency C++ engineer, and when the last person on a Java project left they dumped the project on my desk to support. A behemoth of a Java app that used every bell and whistle they could. There were even dozens of examples of "hobby horsing", where they had taken simple problems and deliberately designed complex solutions so they could "play" with them.
My favourite was the double-nested visitor pattern to select from 4 order types and 3 time limits on orders. The visitor pattern is great for large, diverse object lists with dozens of types that can be plugged in at run time, so you don't know the types. Not for a matrix of 12 known combinations.
In some parts of the software, if you Ctrl+clicked a method, Eclipse would freeze for nearly a minute and present you with a list of 30 or 40 implementations. The logic to select which one ran was a rat's nest of patterns. In the end I used a breakpoint and stepped in, but that required building up a complicated test harness to replicate the original issue.
I called the developer many, many names those days.
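For contrast, a minimal sketch (order types and fees invented) of the boring alternative for a small fixed matrix of known combinations:

enum OrderType { MARKET, LIMIT, STOP, STOP_LIMIT }
enum TimeLimit { DAY, GTC, IOC }

class OrderFees {
    // 4 x 3 known combinations: a plain switch beats a double-nested visitor
    static double feeFor(OrderType type, TimeLimit limit) {
        switch (type) {
            case MARKET:     return 0.10;
            case LIMIT:      return limit == TimeLimit.IOC ? 0.15 : 0.12;
            case STOP:       return 0.20;
            case STOP_LIMIT: return 0.25;
            default: throw new IllegalArgumentException("unknown type: " + type);
        }
    }
}

Dumb, obvious, and the next maintainer can read it in ten seconds.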
There are people I know who would frown on any of the following:
if (...) {
} else if (...) {
}

switch (...) { /* anything */ }

if (a instanceof B) {
}
"Do it with polymorphism and patterns man if statements are so brittle and instanceof is an anti pattern!"
-
Agree. That particular case is why I like the pattern matching stuff in C#. https://docs.microsoft.com/en-us/dotnet/csharp/pattern-matching
The OO side of things works best for small concerns like iteration and algorithms. The actual system is served better by passing value objects around and using the classes merely for decomposition and encapsulation of dependencies. Much less coupling if implementation doesn’t cross boundaries. Also it’s a hell of a lot easier to test a concern reliably if the implementation doesn’t fly around across all isolation boundaries.
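For comparison, newer Java versions (16 and up) gained a similar feature. A minimal sketch of pattern matching for instanceof, with invented shape types:

public class PatternDemo {
    record Circle(double radius) {}
    record Square(double side) {}

    static double area(Object shape) {
        if (shape instanceof Circle c) {        // test and bind in one step
            return Math.PI * c.radius() * c.radius();
        } else if (shape instanceof Square s) {
            return s.side() * s.side();
        }
        throw new IllegalArgumentException("unknown shape");
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(1.0))); // prints pi
    }
}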
-
Bottom-up is sometimes used for OOP design: starting with individual objects, their behaviours and how they interact, and gradually building up the network of peers, which then gives you ideas for your overall code architecture. I don't think a lot of OOP developers think about it that way. Maybe some of the more academically minded ones. It's like designing a schematic by dropping the "main chip" in the centre, spreading the schematic out around it, and letting the overall board layout, PSU, etc. present itself to you.
The other "bottom up" concept is in analysis. A taught, but seldom used water fall analysis method in OOP is to get a written requirement summary text from a good business analyst and the customer. It's often in shared language, which is just customer domain language but with strict terms definitions. You then look for nouns and verbs and pull them out as objects and methods/behaviours, being careful to avoid "System" entities. These form sentences in natural language, The requirement "A document can printed." Translates almost directly to a Document.print() method. These then get rattled around UML design software, almost like an EDA program. Associations, collaborations and aggregations are linked out between them, so called class diagrams. You can then generate object diagrams to show actual instances working together and then sequence diagrams to show the individual collaborations. So you are starting with the core bottom objects and building up to seeing the overall picture. On a big design you could have hundreds of views of the same underlying model. Kind of like a 100+ page schematic. These tools will then even go as far as generating your empty code files... or reverse engineering existing class files back to UML models.
Thankfully I have not had to lead, or be that senior on, such a large-scale design; few do. Most of the stuff I have worked on has been dividable into sets of much smaller components that interact via the database. State-machine like, but just moving data through a pipeline sequence: picking up data in one state, processing it, and putting it back one step further on. Small, simple components make the work easier to divide up and give out to junior engineers and testers.
Today I find a whole load of software I have written or seen involves picking data up from one place, doing very little to it and putting it back somewhere else in a different format. It's quite tedious. The analogy is reading a UART line and sending barely altered, slightly processed, SPI.
Speaking of changing data formats, I am surprised it works at times. I suspect you can shine some light on this; an example: I have a micro that only knows how to communicate over RS-232. Other than the 232 standard (3-wire in this case), the micro is as dumb as a bag of nails. Unfortunately, third parties want the micro on their network with a proper IP address. To further complicate things, they want to run fiber, as the distance is over 300 meters, just outside copper standards. The end result is an RS-232-to-patch-cable adapter, then a patch-cable-to-fiber adapter. It does not end there, as on arrival it has to do these magic tricks in reverse: fiber to patch cable, followed by patch cable back to RS-232. The whole process is transparent to the dumb-as-a-bag-of-nails micro, which thinks it is talking to a 232 interface.
I have done this a few times, but sitting back and thinking of all the pins that had to be put in place just right to pull off the magic, it is impressive. RS-232 is limited to 50 feet according to its standard; in real life it will go well over 300 meters at 9600 baud. In short, the patch and fiber adapters were not needed, as 232 direct would get you there. The only reason for all the adapters and their magic tricks was the politics of doing it by the book.
-
Seems to me that the trend towards using more comprehensive frameworks has a downside: the learning curve.
For example, if you take some test instrument from the '80s or '90s, getting it to work on GPIB is usually a very quick job; sometimes the characters you need to send to the instrument to persuade it to do simple things are printed on a sticker under the device... within a few hours, you can set up and automate a reasonably complex test environment using just single-character ASCII commands.
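For illustration, that "few ASCII characters" style lives on as SCPI. A minimal Java sketch over a raw TCP socket; many LXI instruments listen on port 5025, and the IP address here is made up:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class ScpiHello {
    public static void main(String[] args) throws IOException {
        try (Socket s = new Socket("192.168.1.50", 5025);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println("*IDN?");              // ask the instrument who it is
            System.out.println(in.readLine()); // e.g. maker, model, serial
        }
    }
}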
Compare that to modern products, where you have to jump through a few more hoops to get anything out of it (the flip side, of course, is that once you do, it can do a lot more).
Perhaps there is room in the world for both approaches?
-
That's apart from those of us engineers who were intelligent enough to work out there was more money in cruising along with one eye shut and slacking 3 days a week writing software instead ;)
-
The job definitions are a bit dated. "Programmers" are people who type code in based on a VERY, VERY detailed spec. They do not make design decisions or actually engineer solutions. "Software engineer" is the better-paid title for those who do the latter as well as write code.
The analogy with a CNC machine should make it more obvious: the design engineer works out all the settings required; the programmer just types them into the machine.
The term dates back to the days when we didn't have the near real-time code, run, test cycle on development machines.
-
I personally like to think there is a difference between "developers" and "engineers", although in some cases the boundary is very unclear. In college, the software development I see ranges from very mediocre to excellent. And then there is a course called "Advanced Programming", intended for people who don't know how to program and need to learn it; which in my opinion was a bit misleading.
I think one further interesting point to make about unit testing is that it also gives you room to deviate from happy-path-only testing. What does my software do when a certain collection does not contain all the arguments needed to get started (e.g. some input JSON data)? Or, one day you spot a bug that highlights some faulty dependence between event A and event B; let's write a test that reproduces this fault, and make sure it stays fixed in future revisions.
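A minimal JUnit 4 style sketch of both ideas; OrderParser and Order are hypothetical classes invented for illustration:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OrderParserTest {
    // non-happy path: the input JSON is missing a required field
    @Test(expected = IllegalArgumentException.class)
    public void rejectsInputWithMissingQuantity() {
        new OrderParser().parse("{\"item\":\"widget\"}");
    }

    // regression test pinning down a bug once found; it stays in the suite forever
    @Test
    public void zeroQuantityOrderTotalsToZero() {
        Order order = new OrderParser().parse("{\"item\":\"widget\",\"quantity\":0}");
        assertEquals(0.0, order.total(), 1e-9);
    }
}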
Although EE and SW are quite different fields, I do honestly think that both share some merits. It's relatively easy to get a piece of software to function and do a job. But getting it reliable, safe, past all regulations, and maintainable is what makes it challenging.
For example: in electronics it's also relatively easy to get something functional. But then comes optimizing for cost, power consumption, passing CE (electrical safety, isolation, EMI, ESD), temperature cycling, hot-plugging power and connections, deploying at the customer while offering solutions for remote testing and debugging... So much extra "engineering" left to do beyond the core concept.
I think this is also mostly what separates hobby from "work". I like playing around with technology and concepts, but to some degree the "engineering aspect" to me is just work... a grind. I don't mind it, but I do notice that in hobby projects I tend to lose interest once I reach some of the first milestones I had set, as it feels like I've accomplished what I set out to do. I would probably need to double the time spent in order to make it usable and stable on a day-to-day basis. But for "tinkering" this suffices. Not if you deploy a piece of hardware or software on the other side of the world, and the product needs to work because its use is only relevant for 2 weeks per year.
-
Here's my 2 cents: hardware or software (or firmware, which is kind of between the two), they are just methods to solve problems. No matter what you do, in the end it is all about implementing ideas and solving problems. I am an undergraduate EE, but I do a ton of software: coding in C#, using Python to automate boring tasks, Java to build Android apps, Verilog to describe logic chips, etc... And I am also capable of using registers and understanding hardware clearly, which is one thing a lot of EEs nowadays can't (libraries... yuck). Most people end up coding because they're scared of hardware. When you screw up doing software, you can always roll back, have a backup, or maybe fix the damn bug at the last second. No such thing happens with hardware. Also, hardware requires tools, experience, and a mountain of patience (ever see a badly connected test lead? Guaranteed to make one mad).
-
(ever see a badly connected test lead? Guaranteed to make one mad)
Ever do this:
if( foo = bar ) {
    doSomethingThatNeverHappens();
}
And scratch your head, curse, rewrite half the unit of work, and only then do you spot the missing =. DOH! Java at least doesn't permit this syntax: an assignment is not treated as a boolean expression (unless the operands happen to be booleans, which is rare).
-
I do those intentionally to confuse people sometimes :D
-
I do those intentionally to confuse people sometimes :D
Yes, smart-arse coding can be fun. My favourite for messing with people in C is to use rollovers or precision losses as a feature.
That, and using raw logic expressions instead of if statements. This is common in shell and Perl, but less so in HLLs.
doSomething() && reportSuccess() || reportFailure() && DEBUG && outputDiagnostics();
It is, however, considered bad practice. A lot of high-level languages will give you an error which paraphrases as "Please don't be a dick."
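For the curious, a minimal sketch of the rollover-as-a-feature trick, done in Java where int overflow is well defined (in C this is only safe with unsigned types):

public class TickCompare {
    public static void main(String[] args) {
        int now = Integer.MAX_VALUE - 10;   // tick counter near the wrap point
        int deadline = now + 100;           // wraps past Integer.MAX_VALUE
        // subtracting first makes the comparison wrap-safe
        System.out.println(deadline - now > 0); // true, despite the wrap
    }
}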
-
Actually, I think this is one difference between software and electrical engineering. If the funky logic expression above works in electronics, then no matter how much rigor you need to understand why (regarding operator precedence and evaluation order), it will be used.
In software there are perfectly functional and legal syntax and techniques that we would avoid because they make the code harder to understand for little or no gain.
I have many times in my career called out a bit of code for being "smug" or "smart arse", and in my more senior role today, unless the developer can justify the need for it to be that way, I would have it rewritten to be more explicit and readable.
-
Actually, I think this is one difference between software and electrical engineering. If the funky logic expression above works in electronics, then no matter how much rigor you need to understand why (regarding operator precedence and evaluation order), it will be used.
I slightly disagree -- there are exceptions:
If you expect your design to pass review, don't be a dick. ;D The typical environment being one pass: design it, build it, and be done. A lot of lower-quantity products are designed this way, and a lot of contract design is done this way. It's just cheaper overall (lower risk / rework / respin) to execute a canonical, RTL-level (if you will) design.
If you feel there's a strong motivation to optimize the design early (warning sign: premature optimization!), at the expense of clarity, you'd better write a description to make it clear how your ratsnest is supposed to work. Even so, this is risky, as a lot of engineers without experience in such circuits will tend to take your description at face value, and not try to check it themselves. Result: little actual review accomplished. (Bonus points for running a simulation, though.)
Whereas, if the design is incremental (improving on an existing design, optimizing, cost reducing), and typically in higher-quantity production (so that the savings are worthwhile), you can get into cheaper parts and more quirky circuits. You probably also have the luxury of running a prototype, so the design review need not be as strict, and the engineers involved can all get a feel for how the circuit works on the bench.
Tim