Author Topic: doing math, scaled up integer or float? and pre-processor stuff


Online mikerj

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: gb
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #50 on: September 24, 2021, 08:35:44 am »
I am working with the ADC on an AVR. I am trying to get the result into a number that makes sense, so that defines can be written in, say, mV. The easy-sounding solution is to work in float arithmetic, but my guess is that this will mean adding a library I may not have room for, and lots of CPU cycles.

Alternatively, I can work in scaled-up 32-bit integers. To get the actual number in mV (or mA), bearing in mind any scaling due to, say, input resistor dividers, I tried:

"ADC result" x "ADC reference voltage in mV" x "input stepdown ratio" / "ADC maximum count"

But this turns out to be inaccurate for low values because of the division. With 13 bits, a reference of 1177 mV and no hardware scaling, anything less than a count of 7 will produce 0, and I will of course not be retaining my resolution, as every 7 counts produce just 1 mV/mA.


It's not "inaccurate", it's doing exactly what you have said you wanted i.e. a value with a resolution of 1mV.  A count of less than 7 is less than 1mV, so why would you expect a different result?

If your actual issue is that it's not rounding to the nearest mV value, then this is easily fixed by adding half the denominator to the numerator, e.g.


mV = ((ADC_result * ADC_Vref_mV * Input_div) + (8192UL / 2)) / 8192UL;   /* 8192 = 2^13 counts */
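As a fuller untested sketch (the 13-bit depth, 1177 mV reference and unity divider are just the numbers from the quoted post, not anything authoritative):

Code: [Select]
#include <stdint.h>

#define ADC_BITS     13
#define ADC_COUNTS   (1UL << ADC_BITS)   /* 8192 full-scale counts */
#define ADC_VREF_MV  1177UL              /* reference voltage in mV */
#define INPUT_DIV    1UL                 /* external divider ratio; 1 = no hardware scaling */

/* Raw ADC reading -> millivolts, rounded to nearest. */
static uint32_t adc_to_mv(uint16_t adc_result)
{
    uint32_t num = (uint32_t)adc_result * ADC_VREF_MV * INPUT_DIV;
    return (num + ADC_COUNTS / 2UL) / ADC_COUNTS;
}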
 

Offline Simon (Topic starter)

  • Global Moderator
  • *****
  • Posts: 17814
  • Country: gb
  • Did that just blow up? No? might work after all !!
    • Simon's Electronics
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #51 on: September 24, 2021, 11:46:34 am »
My issue was that I wanted a number expressed in mV not of mV
 

Offline Bassman59

  • Super Contributor
  • ***
  • Posts: 2501
  • Country: us
  • Yes, I do this for a living
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #52 on: September 24, 2021, 03:22:59 pm »
My issue was that I wanted a number expressed in mV not of mV

I realize that I'm a native New Jersey American English Speaker, and not a Native Ol' Blighty English Speaker, but ...

I don't understand the difference between "number expressed IN mV" and "number of mV."
 

Offline Simon (Topic starter)

  • Global Moderator
  • *****
  • Posts: 17814
  • Country: gb
  • Did that just blow up? No? might work after all !!
    • Simon's Electronics
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #53 on: September 24, 2021, 04:22:05 pm »
A number expressed in mV could well be 0.15 mV, as opposed to the number "of" mV like 1, 2, 3.

I did not want to make life hard by using "V", as then I would have defines like 3.5 (V); 3500 (mV) is fine, since it keeps the numbers I write as whole numbers, but I may still want to know the fractional part of a mV.
 

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8168
  • Country: fi
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #54 on: September 24, 2021, 04:27:58 pm »
Just keep increasing the number of digits (and as a result, the number of bits in the storage and calculation) until you are happy with the resolution. For example, use microvolts internally. Even if you need to go to int64_t types for calculation, storage, or both, the result is still far more efficient than a software floating-point library.

Although, if the data source is, say, a 12-bit ADC, it's very hard to justify going beyond 32-bit datatypes unless you want a nearly endless string of meaningless digits to be displayed.
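A minimal sketch of that (untested; the 12-bit ADC and 3.3 V reference are only assumed numbers, picked to show why the intermediate needs 64 bits):

Code: [Select]
#include <stdint.h>

#define ADC_COUNTS  4096UL       /* 12-bit ADC */
#define VREF_UV     3300000LL    /* 3.3 V reference, in microvolts */

/* Raw reading -> microvolts, rounded.  4095 * 3300000 exceeds 32 bits,
 * so the multiplication is done in int64_t. */
static int32_t adc_to_uv(uint16_t adc_result)
{
    return (int32_t)(((int64_t)adc_result * VREF_UV + ADC_COUNTS / 2) / ADC_COUNTS);
}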
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14448
  • Country: fr
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #55 on: September 24, 2021, 04:47:31 pm »
A number expressed in mV could well be 0.15 mV, as opposed to the number "of" mV like 1, 2, 3.

I did not want to make life hard by using "V", as then I would have defines like 3.5 (V); 3500 (mV) is fine, since it keeps the numbers I write as whole numbers, but I may still want to know the fractional part of a mV.

Then use whatever sub-unit that gives you the expected resolution.

For instance, if you need 2 decimals (which means a resolution of 10 µV), then use integers that express the value in units of 10 µV. For 0.15 mV, that would be the number '15'. For, say, 10.55 mV, that would obviously be '1055'. End of story.

If you need to display the values in mV, the routine is trivial. Just use the same routine as if you were displaying an integer, but insert a decimal dot between the fractional part and the integer part. The routine just needs to count the number of decimals.
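A sketch of that (untested), with the value held in units of 10 µV as suggested above, so the last two digits are the fractional millivolts:

Code: [Select]
#include <stdio.h>
#include <stdint.h>

/* value is in units of 10 uV, i.e. hundredths of a millivolt:
 * 15 prints as "0.15 mV", 1055 prints as "10.55 mV". */
static void print_mv(uint32_t value)
{
    printf("%lu.%02lu mV\n", (unsigned long)(value / 100UL), (unsigned long)(value % 100UL));
}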

 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21661
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #56 on: September 24, 2021, 05:00:20 pm »
That is, after all, all that floating point does: it just tracks the decimal point for you.  You have a number at some scale, and simply put the decimal point where you need it when printing the value out (or comparing to numbers at other scales, etc.).

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: Bassman59

Offline Siwastaja

  • Super Contributor
  • ***
  • Posts: 8168
  • Country: fi
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #57 on: September 25, 2021, 09:35:59 am »
And if you just feel lazy and non-pedantic-y...

Just use like microvolts, microamps and int64_t for everything, don't bother checking any ranges, and be done with it.

It's highly unlikely you overflow anything in practice, and you also have enough resolution for nearly all practical purposes.

Yet int64_t is far more efficient than software floating point. Bloat due to laziness, but not nearly as much as with float, which is the ultimate solution for the most lazy.

I can't say I recommend this, though. I recommend always checking ranges and making sure worst-case numbers fit in the datatypes, each and every time, because that saves time in the long run, and enables you to get rid of oversizing "just in case".
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6242
  • Country: fi
    • My home page and email address
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #58 on: September 25, 2021, 09:06:26 pm »
If you need to display the values in mV, the routine is trivial. Just use the same routine as if you were displaying an integer, but insert a decimal dot between the fractional part and the integer part. The routine just needs to count the number of decimals.
Or use something like the internal_voltage_units_to_string() I showed in my post above.  All it needs is a comparison and subtraction (in a loop) against a constant (the one that matches 1.0 in the fixed-point format), and a multiplication by ten.  It's not the fastest way to do it, but on devices like AVRs it is quite small and efficient overall.

In general, you'll want to represent real numbers X using an integer V and a fixed nonnegative integer power of ten P, via
    X = V × 10^(-P) = V / 10^P
so that the inverse is
    V = X × 10^P
noting that since P is a constant, 10^P is a constant as well.

(The conversion function, using the multiply-by-ten-then-subtract algorithm shown in my earlier post, must use an unsigned integer type that can describe 0 to 10×10^P − 1 = 10^(P+1) − 1, inclusive.)

Since the ADC typically produces values that are not directly a power of ten of the measured quantity, mapping the ADC reading linearly to a suitable integer V, as shown in my previous post, achieves this in an easily configured (even runtime-configurable) manner.  Then, the only place where you really see 10^P is in the fixed-point-(or-internal)-to-string function.

If your ADC produces values 0 to 4096, and the measured quantity is 0.0000 V to 5.0000 V, map ADC 0..4096 to 0..50000, and treat the latter as P=4, i.e. print the value using five digits, zero-padded, and insert the decimal point just after the leftmost (most significant) digit.
If the measured quantity is 0.0 mV to 5000.0 mV, map ADC 0..4096 to 0..50000, and treat the latter as P=1 (10¹ = 10), i.e. decimal point just before the rightmost (least significant) digit.
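A compact, untested sketch of exactly that example (0..4096 mapped to 0..50000, shown here with P=4, i.e. volts with four decimals):

Code: [Select]
#include <stdio.h>
#include <stdint.h>

/* Map ADC 0..4096 linearly to 0..50000, i.e. fixed point in units of 100 uV (P = 4 in volts). */
static uint16_t adc_to_fixed(uint16_t adc)
{
    return (uint16_t)(((uint32_t)adc * 50000UL + 2048UL) / 4096UL);
}

/* Print the five digits, zero-padded, with the decimal point after the most significant one. */
static void print_volts(uint16_t value)
{
    printf("%u.%04u V\n", (unsigned)(value / 10000U), (unsigned)(value % 10000U));
}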
« Last Edit: September 25, 2021, 09:08:44 pm by Nominal Animal »
 

Online mikerj

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: gb
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #59 on: September 26, 2021, 08:51:41 pm »
My issue was that I wanted a number expressed in mV not of mV

Adjust your scaling factor so you store the value in 0.125 mV units, which is just above the resolution that the raw ADC value has, and whose fractional part can be stored in just three bits.  When it comes to displaying the value, the fractional part is in the lowest three bits and the integer part is in the upper bits, e.g.

Code: [Select]
printf("%u.%u\n", myval>>3, (myval & 0x0007) * 125);  /* assumes an unsigned value */

If you want something which is more convenient for display space, such as 0.1 mV units, you can do that too, though extracting the integer and fractional parts for display will require either a division/subtraction operation or a modulo operation.
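e.g. (an untested sketch, with myval held in 0.1 mV units):

Code: [Select]
printf("%u.%u\n", (unsigned)(myval / 10), (unsigned)(myval % 10));  /* one division plus one modulo */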
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6242
  • Country: fi
    • My home page and email address
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #60 on: September 27, 2021, 08:51:43 am »
If you want something which is more convenient for display space, such as 0.1 mV units, you can do that too, though extracting the integer and fractional parts for display will require either a division/subtraction operation or a modulo operation.
The division is by the denominator of the fraction, i.e. by 8 (which optimizes to a bit shift); and for the fractional part, one can use a table lookup, too.

For 1/8, the fractions are 0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, and 0.875.  Rounding to a single digit, one can use .0, .1, .3, .4, .5, .6, .8, and .9.

The division cannot really be avoided (although it can be calculated for a fixed divisor using a double-width multiplication); in general, this works for any kind of integer fraction (say, where 27 = 1.0, 54 = 2.0, and 1 = 1/27 ≃ 0.037).
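A sketch of that lookup (untested; myval assumed unsigned, in 0.125 mV units, rounded digits as listed above):

Code: [Select]
#include <stdio.h>
#include <stdint.h>

/* Rounded single decimal digit for each possible n/8 fraction. */
static const uint8_t eighth_digit[8] = { 0, 1, 3, 4, 5, 6, 8, 9 };

static void print_mv_eighths(uint16_t myval)
{
    printf("%u.%u\n", (unsigned)(myval >> 3), (unsigned)eighth_digit[myval & 0x07]);
}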
 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14184
  • Country: de
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #61 on: September 27, 2021, 09:30:29 am »
For the output, one can avoid the fractional part by using integers scaled to the last displayed digit and then adding the decimal point in the output procedure (e.g. an itoa() variation).  The usual itoa() will however include a division.

If really needed one can avoid the division in the itoa part by doing the conversion from the other end, though this is not common:
The digit steps are a multiplication by 10 and then taking away the top digit (e.g. the upper 4 bits) for output. This naturally works for fractional numbers, but can also work with scaled integers. It needs a single long multiplication for the scale factor and no extra division.
For a weak CPU with no hardware divide this can be quite a bit faster (and shorter), especially if one has a general scale factor anyway.
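An untested sketch of that idea, with the fractional part kept as a 28-bit binary fraction in a uint32_t so that after each multiply by 10 the next digit sits in the upper four bits:

Code: [Select]
#include <stdint.h>

/* frac is a binary fraction: the represented value is frac / 2^28 (0 <= frac < 2^28).
 * Emits 'digits' decimal digits, most significant first, with no division at all. */
static void frac_to_digits(uint32_t frac, char *out, uint8_t digits)
{
    while (digits--) {
        frac *= 10U;                           /* 10 * 2^28 still fits in 32 bits */
        *out++ = (char)('0' + (frac >> 28));   /* take away the top digit (upper 4 bits) */
        frac &= (1UL << 28) - 1UL;             /* keep the remaining fraction */
    }
    *out = '\0';
}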
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6242
  • Country: fi
    • My home page and email address
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #62 on: September 27, 2021, 02:05:09 pm »
If really needed one can avoid the division in the itoa part by doing the conversion from the other end, though this is not common:
I showed this in #49, internal_voltage_units_to_string().  The only operations needed are an unsigned integer multiply by 10, and (repeated) comparison against and subtraction of 100000 (which corresponds to 1.0 in the fixed-point units chosen in that post).  That can be adapted to any other fixed-point format.
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14448
  • Country: fr
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #63 on: September 27, 2021, 05:08:08 pm »
If really needed one can avoid the division in the itoa part by doing the conversion from the other end, though this is not common:
I showed this in #49, internal_voltage_units_to_string().  The only operations needed are an unsigned integer multiply by 10, and (repeated) comparison against and subtraction of 100000 (which corresponds to 1.0 in the fixed-point units chosen in that post).  That can be adapted to any other fixed-point format.

Yep.

Now, while itoa()-like functions would mostly be used (or at least, should mostly be used) for display purposes, it's pretty rare that even a software division, handled by the compiler, would be the bottleneck, as the display functions themselves are likely to consume a lot more time. Just a thought. As always, optimize when necessary, but don't overdo it.
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6242
  • Country: fi
    • My home page and email address
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #64 on: September 27, 2021, 10:40:03 pm »
As always, optimize when necessary, but don't overdo it.
Very true; my goal in showing that implementation was to demonstrate a lightweight way to do it on an AVR.  (I do realize it was a bit too deep to fit in the discussion; apologies to everyone!)

As discussed in the Constructing short C strings ... thread and elsewhere, there are lots of ways to do the conversion, some of them simple, some of them efficient, and many both (but exactly which ones depends on the exact hardware architecture).  Even readable pure-standard C implementations are fast enough if used only, say, a few hundred times per second or less.

Let me retry my original attempt from a different tack.  Hopefully, this will be easier to understand for everyone than my previous descriptions.

Any real value V can be approximated using an integer X in fixed point format using a constant factor F,
    XV × F
    VX / F

For some use cases, you want a power of ten F, for other use cases a power of two F, and for others an arbitrary, possibly even rational or irrational non-integer F.
Power of ten F is nice for display purposes; conversion to string is fast, but arithmetic operations (except addition and subtraction) are slower.
Power of two F is nice for computation; conversion to string is slower, but arithmetic is fast: integer speed, except for a binary right shift after each multiplication or division.
Non-integer F are rare, and mostly used in special cases.  (Even X≃V², V≃sqrt(X), is more common, for example in computations involving 2D (lattice) Euclidean distances.)

Simon wants X to be human-readable in mV, but with at least one fractional digit of precision.  The obvious solution is to have X=1 represent V=0.1 mV, i.e. F=10 [mV]⁻¹.  Then, 5.0 V = 5 000 mV corresponds to X=50 000.  A uint16_t type X can then represent voltages between 0 and 6.5535 V = 6 553.5 mV; useful on 8-bit architectures like AVRs.

However, the ADC in a microcontroller provides a ratiometric value, usually with some power-of-two count corresponding to a specific reference voltage, and a zero count at zero volts (although the ADC might have to be biased using e.g. an opamp circuit if the interesting voltage range does not include zero).  This we solve by linearly mapping the ADC range to the correct X range, using exact integer math (because both are integers); this is the rational F case, F=N/D, where N is the range (number of integer values) of X, and D is the range (number of integer values) of ADC results.  We only need to be careful to use integer types that are large enough to hold the intermediate integer values.

That mapping operation requires one multiplication and one division, and possibly an addition and/or subtraction (when one or both ranges do not start at zero).  If the full range of a typical binary ADC is used, multiplication by F is an integer multiplication (by N) followed by a binary right shift (to implement the division by power-of-two D), and the entire operation is rather fast.  (Also note that for runtime calibration, calibrating the V range – assigning correct voltage values to ADC zero reading and to the value the ADC compares to –, while keeping the ADC range D a constant power of two, means this efficiency is retained! Unfortunately, in practical circuits, it means we need to extrapolate the voltage from an ADC reading, rather than just store the ADC reading for known voltages, which can reduce the accuracy of the calibration.)
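An untested sketch of such a calibrated linear mapping (the structure and field names are only placeholders), keeping the ADC range D a power of two so that the division is a shift:

Code: [Select]
#include <stdint.h>

#define ADC_SHIFT  12   /* D = 2^12; dividing by D becomes a right shift */

struct adc_cal {
    uint32_t x_at_zero;   /* fixed-point value (e.g. 0.1 mV units) at ADC reading 0 */
    uint32_t x_at_full;   /* fixed-point value at ADC reading 2^12 */
};

/* X = x_at_zero + adc * (x_at_full - x_at_zero) / 2^12, rounded.
 * Assumes x_at_full > x_at_zero and that the span stays below 2^20,
 * so the 32-bit intermediate cannot overflow. */
static uint32_t adc_to_calibrated(uint16_t adc, const struct adc_cal *cal)
{
    uint32_t span = cal->x_at_full - cal->x_at_zero;
    return cal->x_at_zero + (((uint32_t)adc * span + (1UL << (ADC_SHIFT - 1))) >> ADC_SHIFT);
}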

Using F=10 (for mV units) or F=10000 (for V units) makes display fast and easy, and does not affect addition or subtraction, but it does add an integer multiply-by-F when dividing by a voltage V, and an integer divide-by-F when multiplying by a voltage V.  If such arithmetic operations are rare compared to displaying the values as strings, then a power-of-ten F makes sense.  If voltages are often used in multiplication and/or division, then using a power of two F would be more efficient, although then X would be in units of (some power of two) millivolts or volts.  (For tenth of millivolt precision, F=2³=8 would work, although F=2⁴=16 would allow all fractional digits to appear in the value; these two bracketing the tenth-of-a-millivolt precision.  Is octal human-readable? With F=8, X=01 (decimal 1) would refer to V=0.125, X=010 (decimal 8) to V=1.0, and say X=035 (decimal 29) to V=3.0+5/8=3.625.  However, X=07231 (decimal 3737) would refer to V=7×64+2×8+3+1/8=467.125, so I don't think larger octal values are that human-readable.)
« Last Edit: September 27, 2021, 10:47:15 pm by Nominal Animal »
 

Online Kleinstein

  • Super Contributor
  • ***
  • Posts: 14184
  • Country: de
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #65 on: September 28, 2021, 08:08:49 am »
Even with a binary factor for the fractional part one can still do an efficient conversion to an ASCII format for the output. One just has to treat the whole-number part and the fractional part separately. The whole-number part can use the normal itoa() method. For the fractional part one can multiply by 10 and that way get the first digit after the decimal point as the new whole-number part. Repeat that for each digit. This can be even more efficient than the normal itoa() way (least significant digit first) used for whole numbers, just starting from the other end (most significant digit first).

With an 8-bit CPU this makes absolute sense if the shift for the fractional part is a multiple of 8 bits.

For the given original problem one usually does not need to do the math and decisions in whole physical units like mV. The more logical way is to do the math in hardware-related units, LSB (ADC steps) and maybe fractions of that, and only convert to physical units for output and input. Constants in the code (or better, as a define at the top) can still be written in a readable way with an extra scale factor.
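For example, a sketch of such a define (untested; the reference and threshold numbers are made up):

Code: [Select]
#include <stdint.h>

#define ADC_COUNTS     8192UL   /* 13-bit ADC */
#define ADC_VREF_MV    1177UL   /* reference in mV */

/* A limit written readably in mV, converted to ADC LSB at compile time (rounded). */
#define MV_TO_LSB(mv)  ((uint16_t)(((mv) * ADC_COUNTS + ADC_VREF_MV / 2UL) / ADC_VREF_MV))

#define OVERVOLTAGE_LSB  MV_TO_LSB(900UL)   /* 900 mV, expressed in raw ADC counts */

The run-time comparison then stays in raw ADC units, e.g. if (adc_result > OVERVOLTAGE_LSB) { ... }.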
 

Online mikerj

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: gb
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #66 on: October 03, 2021, 10:30:14 am »
If you want something which is more convenient for display space, such as 0.1 mV units, you can do that too, though extracting the integer and fractional parts for display will require either a division/subtraction operation or a modulo operation.
The division is by the denominator of the fraction, i.e. by 8 (which optimizes to a bit shift); and for the fractional part, one can use a table lookup, too.

For 1/8, the fractions are 0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, and 0.875.  Rounding to a single digit, one can use .0, .1, .3, .4, .5, .6, .8, and .9.

The division cannot really be avoided (although it can be calculated for a fixed divisor using a double-width multiplication); in general, this works for any kind of integer fraction (say, where 27 = 1.0, 54 = 2.0, and 1 = 1/27 ≃ 0.037).

Agreed. Many years ago when working primarily with PICs and AVRs I wrote a little app to produce C code for division/multiplication to a specified accuracy using only shifts and addition/subtraction, not sure what happened to it.  A little care is needed to ensure you don't introduce rounding errors by throwing away LSBs too early.
 

Offline David Hess

  • Super Contributor
  • ***
  • Posts: 16607
  • Country: us
  • DavidH
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #67 on: October 03, 2021, 08:56:35 pm »
Agreed. Many years ago when working primarily with PICs and AVRs I wrote a little app to produce C code for division/multiplication to a specified accuracy using only shifts and addition/subtraction, not sure what happened to it.  A little care is needed to ensure you don't introduce rounding errors by throwing away LSBs too early.

Long ago for PIC, I implemented the base-2 log and antilog routines from Knuth's book using only shifts and adds to do multiplies, divides, powers, and roots.  In practice I ended up leaving a lot of variables and constants in log form.

« Last Edit: October 03, 2021, 08:58:14 pm by David Hess »
 
The following users thanked this post: mikerj, Nominal Animal

Online mikerj

  • Super Contributor
  • ***
  • Posts: 3238
  • Country: gb
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #68 on: October 05, 2021, 07:10:27 pm »
Agreed. Many years ago when working primarily with PICs and AVRs I wrote a little app to produce C code for division/multiplication to a specified accuracy using only shifts and addition/subtraction, not sure what happened to it.  A little care is needed to ensure you don't introduce rounding errors by throwing away LSBs too early.

Long ago for PIC, I implemented the base-2 log and antilog routines from Knuth's book using only shifts and adds to do multiplies, divides, powers, and roots.  In practice I ended up leaving a lot of variables and constants in log form.

I wrote fixed-point log/antilog functions for work maybe 15 years or so back, though they do use a small table as well as shift/add.  Pretty essential on small 8-bit micros when you need control loops etc. to run at a decent rate, but on modern 32-bit micros with a barrel shifter you hardly have to even think about the overhead of log operations.

I miss the heyday of the PICList with Scott Dattalo et al coaxing amazing functionality out of a handful of instructions.
« Last Edit: October 05, 2021, 07:12:20 pm by mikerj »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6242
  • Country: fi
    • My home page and email address
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #69 on: October 05, 2021, 10:32:44 pm »
Background stuff:

It is interesting to consider how fixed point, floating point, and pure exponential form are related.

In fixed point, a real number r is approximated with an integer n via a scaling constant S:
    r ≃ n·S

In floating point, a real number r is approximated with an integer mantissa m and an exponent p of base B:
    r ≃ m × B^p
For nonzero finite numbers in IEEE-754 Binary32 ("float") and Binary64 ("double"), B = 2 and the most significant bit of m, always 1, is implicit (not stored in memory).

In pure exponential form, a (nonnegative) real number r is represented via v, its logarithm base B, typically in fixed-point format as an integer n via a scaling constant S:
    v = log_B(r) ≃ n·S
    r = B^v ≃ B^(n·S)

In fixed point, addition and subtraction are trivial, but multiplication requires an extra rescaling (a division by 1/S).  In floating point, none of the operations are really trivial, but there exist some really clever algorithms to reduce the number of sub-operations needed.  In pure exponential form, addition and subtraction require at least an antilogarithm (exponentiation), but multiplication and division are trivial:
    r₁ × r₂ ≃ B^((n₁+n₂)·S)
    r₁ / r₂ ≃ B^((n₁−n₂)·S)
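A concrete illustration (an untested Q16.16 sketch, where S = 2^-16): addition is a plain integer add, but the product needs one extra rescale.

Code: [Select]
#include <stdint.h>

/* Q16.16 fixed point: 1.0 is represented by 1 << 16, i.e. S = 2^-16. */
static int32_t q16_add(int32_t a, int32_t b)
{
    return a + b;                               /* no rescale needed */
}

static int32_t q16_mul(int32_t a, int32_t b)
{
    /* Widen, multiply, then rescale back; assumes the usual arithmetic
     * right shift for negative products. */
    return (int32_t)(((int64_t)a * b) >> 16);
}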

An additional form I've seen is integer rational representation, using a numerator n and a denominator (or divisor) d:
    r ≃ n / d
The "annoying" feature of this is that one often needs to divide both n and d by their greatest common divisor (but note that GCD can be implemented as a binary algorithm that does one bit per iteration, and only does subtractions and binary shift right by one bit at a time).
This is useful when dealing with exact numbers (and no irrational constants), particularly when n and d are represented as arbitrary-size integers.
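An untested sketch of that binary GCD, using only subtractions and one-bit shifts:

Code: [Select]
#include <stdint.h>

static uint32_t gcd_binary(uint32_t a, uint32_t b)
{
    unsigned shift = 0;

    if (a == 0) return b;
    if (b == 0) return a;

    /* Factor out the common powers of two. */
    while (((a | b) & 1U) == 0) { a >>= 1; b >>= 1; shift++; }

    while ((a & 1U) == 0) a >>= 1;   /* make a odd */
    while (b != 0) {
        while ((b & 1U) == 0) b >>= 1;
        if (a > b) { uint32_t t = a; a = b; b = t; }   /* keep a <= b */
        b -= a;                                        /* subtraction only */
    }
    return a << shift;
}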

A somewhat related form is when doing interval arithmetic.  Instead of a single real r, we use two, r_min and r_max, to specify the interval.  Arithmetic operators (+, -, ×, /) take a bit of care to implement correctly for these.  When dealing with multiple variables and their error bars, this is surprisingly useful: you know at once the full range (interval) of the results, taking the various terms' intervals into account.  You can extend this by adding a third real, say peak, or median, but you need to be careful in defining its behaviour wrt. the above operators.
Generalizing this moves you into statistics and probability theory, especially probability distributions and probability density functions.  You can treat each numerical value as a probability density function, highest at the most likely values, and do basic arithmetic on them as you would on reals, getting the probability density function of the result.  Very useful in some cases, but rare.  Often the distributions are described using basis functions: say, as a (sum of) Gaussian(s) approximating the actual distribution, parametrised by two reals each: µ (mean/median/mode) and σ (standard deviation or "width").
(For example, consider you have numerical distributions representing say K variables and constants.  Instead of evaluating your arithmetic function N^K times (given N samples per variable or constant) to construct a histogram of the results (approximating the probability density function of the result), you only evaluate the function once, but using probability density functions to describe each variable and constant.)
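Going back to plain intervals, an untested sketch of the product operator; it has to consider all four endpoint products, because signs can swap the ordering:

Code: [Select]
#include <stdint.h>

typedef struct { int32_t min, max; } interval_t;

static int32_t min4(int32_t a, int32_t b, int32_t c, int32_t d)
{
    int32_t m = a;
    if (b < m) m = b;
    if (c < m) m = c;
    if (d < m) m = d;
    return m;
}

static int32_t max4(int32_t a, int32_t b, int32_t c, int32_t d)
{
    int32_t m = a;
    if (b > m) m = b;
    if (c > m) m = c;
    if (d > m) m = d;
    return m;
}

/* [a.min, a.max] * [b.min, b.max]; assumes every product fits in int32_t. */
static interval_t interval_mul(interval_t a, interval_t b)
{
    int32_t p1 = a.min * b.min, p2 = a.min * b.max;
    int32_t p3 = a.max * b.min, p4 = a.max * b.max;
    interval_t r = { min4(p1, p2, p3, p4), max4(p1, p2, p3, p4) };
    return r;
}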

I just love how these things build on top of each other, and connect with seemingly completely unrelated (but useful) methods and descriptions.  That's why I love to explore these things, even if I never memorize the details, only absorb the connections and key details –– which can be used as tools to solve all sorts of odd problems, even if one needs to look up the nitty gritty details.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21661
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #70 on: October 06, 2021, 12:27:26 am »
And on a less related note, I find intervals very useful in analog design: match up the input and output voltage/current ranges, and that's more or less it.

Or more generally, in CS: if a function doesn't operate (nominally) on the full range of its parameters, then explicitly test the extrema, the edge conditions around / beyond where it's defined.  Not always easy or feasible (how do you define a range on a C string?), but it can be very illuminating when possible.  And it gets more feasible every year: you can exhaustively test ~40 bits of parameters these days, and more with statistical coverage or fuzzing.  If, of course, you can afford the time to run such tests :)

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6242
  • Country: fi
    • My home page and email address
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #71 on: October 06, 2021, 02:44:08 am »
Interval arithmetic could be useful in embedded implementations when evaluating multiple variables, and decisions like "still within operational range" or "still outside operational range", avoiding issues related to oscillation (similar to Schmitt triggers).  Say, you might have a large number of sensors, or many noisy sensor readings, and use confidence intervals (say using the interval as the range that contains 75% of the samples).

I can imagine it could be useful in e.g. agriculture, where soil moisture sensors often fail (not the electronics, the sensor itself), so you might make a system of many cheap sensors, maybe with a red/green LED on top to show whether the sensor is considered useful or not (when directed by the system), so bad ones are trivial to replace.
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 21661
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: doing math, scaled up integer or float? and pre-processor stuff
« Reply #72 on: October 06, 2021, 09:26:03 am »
Kind of the opposite concept to fuzzy logic, though you could just as well say that rather than being fuzzy, it's just got straight sides between what counts as each condition (above/in/below range).

Hmmm, you could do that multivariate as well, not that it would be exactly trivial to conceptualize, or use embedded.  Just that it reminds me of the error matrix of like a Kalman filter.

I suppose building it from the statistics of an array of sensors, you get kind of both, a statistical confidence as well as a fuzzy logic.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

