
Is ST Cube IDE a piece of buggy crap?


paulca:

--- Quote from: Siwastaja on September 13, 2024, 08:07:04 am ---You realize it is impossible to fully test any non-trivial piece of software due to sheer number of combinations when you add state and interactions (not even talking about timing)? Your actual test coverage is like 0.00000000001% even when you think it's 99%.

--- End quote ---

Actually, those numbers are totally quantifiable.

"Coverage" is a term only used for testing unit sized chunks of code.  (Sub circuits).
We would run a full suite of unit tests which generate a report on "coverage".  It's not based on "Lines of code" alone.  It's also based on cyclomatic complexity coverage.  It will raise warnings on untested paths through conditions and exception paths.

When you are testing a unit of code which is 10 lines long and has 3 branching statements, you can test it pretty close to 100% with only a dozen tests. Your negative tests ensure that it meets its contract when given invalid state or data.

You have 100 units? You need 1200 tests? Sounds about right, yes. Maybe a bit high; hopefully not all of them have that high a CC. It is hard though. In "hard core" stuff, a method that takes 3 arguments immediately has a CC of 6, as you MUST pre-condition your inputs: you have 3 inputs, each of which can be valid or invalid, so the "un-optimized" CC is 6. Then you have to validate your output (return value), which makes it at least 8. When you are working in a code base that fails the build if you hit a CC over 20 in a method, it can get a bit tight. There are ways to refactor compound expressions to lower the CC by cancelling paths out.
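To make that concrete, here is a minimal sketch of pre-conditioning inputs and validating the output (the function, its arguments and the validation rules are all invented for the example). Every check adds a branch, and therefore a path the unit tests have to cover:

--- Code: ---
#include <cstdint>
#include <optional>

// Hypothetical unit: scale a raw ADC reading into millivolts. Each
// precondition below adds a branch, and therefore a path that a test
// suite chasing full path coverage has to exercise.
std::optional<std::int32_t> scale_reading(std::int32_t raw,
                                          std::int32_t vref_mv,
                                          std::int32_t resolution_bits)
{
    if (raw < 0)      return std::nullopt;            // input 1 invalid
    if (vref_mv <= 0) return std::nullopt;            // input 2 invalid
    if (resolution_bits <= 0 || resolution_bits > 31)
        return std::nullopt;                          // input 3 invalid

    const std::int64_t mv =
        (static_cast<std::int64_t>(raw) * vref_mv) >> resolution_bits;

    if (mv > INT32_MAX) return std::nullopt;          // validate the output too
    return static_cast<std::int32_t>(mv);
}
--- End code ---

Covering that honestly means one test per rejected input, one for the overflow path and one for the happy path, which is how a 10-line unit ends up needing the better part of a dozen tests.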

Then we have integration tests, which more or less do the same thing, but specifically on the "integration points" or "external contracts". They differ in that these tests typically have to construct their harness, populate it with a known state, run a dozen tests and then destroy it again, so the next test along can rebuild a consistent start state. These can often end up taking hours and are usually not run for every build; they would be run when you wish to raise a pull request upstream, say.
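For flavour, the harness usually has roughly this shape; a minimal GoogleTest-style sketch, in which the FakeDatabase, submitOrder and the fixture itself are all invented stand-ins:

--- Code: ---
#include <gtest/gtest.h>
#include <set>

// Invented stand-in for the external dependency; a real integration
// test would talk to an actual database or service instance.
struct FakeDatabase {
    std::set<int> orderIds;
    void loadFixture() { orderIds = {1, 2, 3}; }  // known start state
    void clear()       { orderIds.clear(); }
};

// Invented code under test: inserting a duplicate order id must fail.
static bool submitOrder(FakeDatabase& db, int id) {
    return db.orderIds.insert(id).second;
}

// The fixture constructs the known state before every test and
// destroys it afterwards, so each test starts from the same baseline.
class OrderServiceIT : public ::testing::Test {
protected:
    void SetUp() override    { db.loadFixture(); }
    void TearDown() override { db.clear(); }
    FakeDatabase db;
};

TEST_F(OrderServiceIT, AcceptsNewOrderId) {
    EXPECT_TRUE(submitOrder(db, 42));
}

TEST_F(OrderServiceIT, RejectsDuplicateOrderId) {
    EXPECT_TRUE(submitOrder(db, 42));
    EXPECT_FALSE(submitOrder(db, 42));
}
--- End code ---

Multiply the setup/teardown cost by a few hundred tests and you can see why these suites take hours rather than seconds.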

The thing is, we are still in the "development estimate" here. "Testing" hasn't started; we are still in the 20% of effort. One of the opening requirements for development to hand over to the test teams is that the unit-test and integration-test suites have been reviewed and are appropriate.

Then the real testing begins: not bottom-up testing, but top-down testing. There are many, many test strategies, tools and automation frameworks available to the test teams.

There is an enterprise software thing which gets bashed a lot in one-man-band, <10kloc projects and is usually referred to as "bloat".

Dependency chains. If I need to sort a list or set of data, I could just write the most basic and appropriate sort routine right there in the code file. It would probably end up a much shorter implementation than the canned, generic, one-size-fits-all sort routines available as open source (though when it comes to things like maths and sorting, the off-the-shelf version is likely a lot faster). However, the moment you write it, you now own the testing of it.

New code = bugs.
Pre-tested code = fewer bugs.
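As a sketch of that trade-off (the Reading type and both routines are invented for illustration): the library one-liner delegates its testing to code with millions of users, while the hand-rolled version makes every edge case our problem:

--- Code: ---
#include <algorithm>
#include <cstddef>
#include <vector>

struct Reading { int timestamp; double value; };  // invented data type

// Option 1: the pre-tested library routine. Edge cases, correctness
// and performance are the standard library's problem, not ours.
void sortByTime(std::vector<Reading>& rs) {
    std::sort(rs.begin(), rs.end(),
              [](const Reading& a, const Reading& b) {
                  return a.timestamp < b.timestamp;
              });
}

// Option 2: hand-rolled insertion sort. Quick to write, but we now
// own testing it: empty input, single element, duplicates,
// reverse-sorted input, stability...
void sortByTimeBespoke(std::vector<Reading>& rs) {
    for (std::size_t i = 1; i < rs.size(); ++i) {
        Reading key = rs[i];
        std::size_t j = i;
        while (j > 0 && rs[j - 1].timestamp > key.timestamp) {
            rs[j] = rs[j - 1];
            --j;
        }
        rs[j] = key;
    }
}
--- End code ---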

The open source library containing the accepted, de facto set of data-manipulation functions you want is probably downloaded by a million people every single day. A subset of those will be pulling the source version and building it locally. It contains a MASSIVE suite of tests. However, it's not necessarily the tests that give the package "gravity"; they only reassure us that release quality will be consistent. No, it is the millions and millions of software projects out there running this library, pumping trillions of dollars around the world, and if ONE of them should blow up because of this library, it WILL be raised to the entire world in a matter of hours.

So, when a junior asks me if they should just write the functionality instead of importing the library, it isn't really a "functional" question. It's a "non-functional" matter, and non-functional requirements are a whole different ball game. We need to consider ALL configuration items in the repo and justify each. The pre-canned library was likely worked on by someone with far greater low-level knowledge in the area than the junior before me. It comes pre-tested, pre-validated, with millions of worldwide use cases saying that it already works perfectly well and will not need to be tested by us.

In the MCU world, your non-functional requirements are such as to make most enterprise engineers come out in a sweat and want to go home. Once we calm down, we would push the LEAST possible functionality into that non-functional regime, making it small, simple, easy to develop, easy to test and thus cheap. Then we would do the main bulk of the functionality somewhere else, having the "micro" upload to the "macro" for processing.
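A minimal sketch of that split, assuming a hypothetical adc_read_raw()/uart_write() HAL (stubbed here so it compiles; on real hardware they would come from the vendor BSP):

--- Code: ---
#include <cstddef>
#include <cstdint>

// Hypothetical HAL hooks, stubbed so the sketch builds standalone.
static std::uint16_t adc_read_raw() { return 0x0123; }
static void uart_write(const std::uint8_t*, std::size_t) { /* TX here */ }

// The "micro" does the least possible: sample and forward a tiny
// framed packet. Filtering, storage and analysis live on the "macro"
// side, where development and testing are cheap.
void sample_and_forward()
{
    const std::uint16_t raw = adc_read_raw();
    const std::uint8_t frame[3] = {
        0xA5,                                    // start-of-frame marker
        static_cast<std::uint8_t>(raw >> 8),     // high byte
        static_cast<std::uint8_t>(raw & 0xFF),   // low byte
    };
    uart_write(frame, sizeof frame);
}
--- End code ---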

No one size fits all, though. As suggested, it often comes down to the non-functional requirements.

When someone tells you that your SLA, wire to wire, is sub-100 µs, and that you need to support 30k messages per second, per session, across 40 R/W sessions, that has a MASSIVE impact on how you go about things.

If the pre-canned C++ lib takes 23 µs to sort the list and the bespoke, low-level C / inline-ASM version takes 16 µs, it is easy to justify the extra testing effort to assure it.
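A back-of-the-envelope harness for that kind of comparison might look like this hosted sketch; on the MCU itself you would read a hardware cycle counter rather than std::chrono, and you would average over many runs either way:

--- Code: ---
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

// Hosted micro-benchmark sketch: time one std::sort over a fixed,
// reproducible dataset. A serious benchmark repeats and averages.
int main()
{
    std::mt19937 rng(42);                       // fixed seed: repeatable
    std::vector<std::uint32_t> data(1000);
    for (auto& v : data) v = rng();

    auto work = data;                           // sort a fresh copy
    const auto t0 = std::chrono::steady_clock::now();
    std::sort(work.begin(), work.end());
    const auto t1 = std::chrono::steady_clock::now();

    const auto us =
        std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0);
    std::printf("std::sort of %zu elements: %lld us\n",
                data.size(), static_cast<long long>(us.count()));
}
--- End code ---

Even a crude harness like that keeps the argument honest: the decision is those 7 µs weighed against the testing you now own.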


elektryk:
Has anybody managed to run Cube IDE 1.16.1 on Win7?
Compilation goes OK, but it can't write to the MCU.
Cube Programmer doesn't even start and complains about Java; the ST-Link Utility works, but it doesn't support the H5 series...

peter-h:
I recall testing 1.16.0 on Win7-64 and it worked, but the SWD debugger interface was broken, as it was in 1.15.0. The 1.15.1 release reportedly fixed the debugger. This was much discussed on the ST forum (where ST basically deny everything, or don't respond).

However, I can't help you much, because 1.15.x installed the v12 GCC tools, which "break" various things - a few annoying things one needs to fix, followed by extensively re-testing everything - and I could not be bothered, so I am staying with 1.14.1. If you need the .exe installer, PM me. ST keep only the .0 releases on their website ;)

AFAICT everything runs on win7-64 just fine, despite the release notes saying all kinds of stuff.

Don't waste your life chasing the latest version of this moving-target tool.

DavidAlfa:
You can just copy the compiler from any version you want, then set the toolchain in the project settings.

peter-h:
Yes, I discovered that, but:

- it is a hassle, especially as you then have to document the convoluted procedure for any future person working on the project
- GCC v12 has no advantage over v11
- any new warnings need to be chased down, perhaps a day's work, plus regression testing of all the product(s) ;)
- Cube v1.16 and v1.15 have no advantage over 1.14.1. Perhaps support for new chips... more relevant to CubeMX.


--- Quote ---Cube Programmer doesn't even start but complains about Java, ST Link utility works but it doesn't support H5 series...
--- End quote ---

There are some posts here about those tools. They have mostly been obsoleted by ST and, I think, the latest versions have some Java issues; see here:
https://www.eevblog.com/forum/microcontrollers/production-loading-of-code-into-a-st-32f417/
