Author Topic: Neat algorithms  (Read 9340 times)


Offline legacy

  • Super Contributor
  • ***
  • Posts: 4415
  • Country: ch
Re: Neat algorithms
« Reply #200 on: September 26, 2019, 10:02:18 am »


Interview in Denmark

  • dude - What is the best way to express a sorting algorithm?
  • me - something like (... I pulled out my PDA from my pocket, and wrote the code to show him the result ...) qsort([]) -> []; qsort([Pivot|Rest]) -> qsort([X || X <- Rest, X < Pivot]) ++ [Pivot] ++ qsort([Y || Y <- Rest, Y >= Pivot]).
  • dude - ... (he looked at his shoes ... confused ... and didn't say a word ...)
  • me - hey, doesn't it look good? It's the shortest, hence the best, way to answer your question!
  • dude - why is it supposed to be the best way?
  • me - the less code you write, the lower the probability of bugs
  • dude - are you willing to stay up around the Arctic during winter, even when it's coldest?
  • me - yup
  • dude - hired!

Moral of the story: a piece of Erlang code can save your day better than 200 or 2K lines of C can.
And it can even get you a free ticket to Greenland :D
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 4724
  • Country: fr
Re: Neat algorithms
« Reply #201 on: September 26, 2019, 02:10:08 pm »
Just a note: this problem (https://projecteuler.net/problem=70) is typical of what I said earlier about primes and thinking outside the box. A naive approach will take forever to complete. I managed to implement it so that it completes in about 30 ms (and gives a correct answer ;D ) on my Core i7/5930K.

Not a very hard one, but if you haven't done it yet, you'll probably spend a little while...
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 4724
  • Country: fr
Re: Neat algorithms
« Reply #202 on: September 26, 2019, 02:17:26 pm »
(...)
  • me - something like (... I pulled out my PDA from my pocket, and wrote the code to show him the result ...) qsort([]) -> []; qsort([Pivot|Rest]) -> qsort([X || X <- Rest, X < Pivot]) ++ [Pivot] ++ qsort([Y || Y <- Rest, Y >= Pivot]).
(...)

Ahah, nice. Too bad Erlang sometimes almost looks like the Brainfuck language: https://en.wikipedia.org/wiki/Brainfuck
 :-DD (just slightly exaggerating)

Oh, and while we're having a laugh, let's laugh (now some people may frown): here is the Whitespace language: https://en.wikipedia.org/wiki/Whitespace_%28programming_language%29
which can be loosely described as some variant of Brainfuck for Python users. :-DD
 

Offline taktoa

  • Newbie
  • Posts: 2
  • Country: us
    • taktoa.me
Re: Neat algorithms
« Reply #203 on: December 01, 2019, 09:27:39 pm »
One bleeding-edge algorithm quite relevant to electronics engineering that I find is not very well known is "compressive sensing". Every ADC depends on the Shannon-Nyquist theorem, which shows that if you sample at double the maximum frequency of a bandlimited signal, you can perfectly reconstruct it. What is not well known is that this is only a sufficient condition for perfect reconstruction, not a necessary one; if you make some assumptions about the signals you are likely to receive, you can do much better (i.e.: reconstruct a signal from fewer samples). In compressive sensing, the assumption is that there is some linear transformation you can apply to the signal that makes it sparse (i.e.: mostly zeros); in other words, the signal must be compressible in some sense. (Noise is typically not compressible, but you also typically don't care if the noise in the signal you are trying to measure is replaced with different noise that has a similar distribution.)

The typical setup for compressive sensing is that you have some matrix/linear transform F that, when applied to your signal, gives a vector with mostly small values. For example, the discrete cosine transform has this property when applied to photographs (this is how JPEG works). Then you cook up a matrix M such that the product MF satisfies a technical condition called the restricted isometry property (RIP; this technicality doesn't actually matter much, for reasons I'll get into later). Then you take each row of the matrix and find some physical way to compute the dot product of that row with the vector representing your signal (for example, if you want to compute the dot product of an optical image with an arbitrary vector, you could bounce it off a digital micromirror device and then focus it into a lens that is shining onto a single pixel). This gives you a new vector y, which has one element for each row of your matrix.

Then you just have to solve the linear system MFx = y, which is underconstrained because the number of rows of MF (the number of measurements) is intentionally small. Although it is underconstrained, we know something special about x: it has a lot of zeros! So we try to find the solution that maximizes the number of zeros, i.e. minimizes the L0 "norm" (quotation marks because counting nonzeros is not actually a norm). It turns out that this is really difficult -- NP-hard, in fact -- but the RIP we posited earlier saves us, since there is a theorem that if MF has the RIP and you minimize the L1 norm instead of L0, you get a solution to the L0 problem. There is also a theorem that almost all matrices satisfy the RIP, in the sense that a suitably scaled random matrix satisfies it with overwhelming probability, so it suffices to simply choose M at random.
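To make the recovery step concrete, here is a toy sketch in C (my own illustration, not production code): it builds a random ±1/sqrt(K) matrix standing in for MF, measures a 3-sparse signal, and approximately solves the L1 problem with ISTA (iterative soft thresholding), about the simplest solver for basis pursuit denoising. The dimensions, step size, and lambda are arbitrary illustrative choices.

Code: [Select]
/* Toy compressive-sensing recovery via ISTA (iterative soft thresholding).
   Sketch only: dimensions, step size and lambda are arbitrary choices,
   and real solvers (basis pursuit, in-crowd, ...) are far more efficient. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N 64                      /* signal length */
#define K 24                      /* measurements; K < N: underdetermined */

static double soft(double v, double t)        /* soft-threshold operator */
{
    return (v > t) ? v - t : (v < -t) ? v + t : 0.0;
}

int main(void)
{
    static double M[K][N];        /* random +-1/sqrt(K) measurement matrix */
    double x_true[N] = {0}, x[N] = {0}, y[K];

    srand(42);
    for (int i = 0; i < K; i++)
        for (int j = 0; j < N; j++)
            M[i][j] = (rand() & 1 ? 1.0 : -1.0) / sqrt((double)K);

    /* a 3-sparse "signal" and its K measurements y = M * x_true */
    x_true[3] = 1.7; x_true[20] = -0.9; x_true[45] = 2.3;
    for (int i = 0; i < K; i++) {
        y[i] = 0.0;
        for (int j = 0; j < N; j++) y[i] += M[i][j] * x_true[j];
    }

    /* ISTA: gradient step on ||y - Mx||^2 / 2, then soft-threshold,
       which drives most components of x to exactly zero */
    const double step = 0.1, lambda = 0.01;
    for (int it = 0; it < 10000; it++) {
        double r[K], g[N] = {0};
        for (int i = 0; i < K; i++) {            /* residual r = y - M x */
            r[i] = y[i];
            for (int j = 0; j < N; j++) r[i] -= M[i][j] * x[j];
        }
        for (int j = 0; j < N; j++)              /* gradient g = M^T r */
            for (int i = 0; i < K; i++) g[j] += M[i][j] * r[i];
        for (int j = 0; j < N; j++)
            x[j] = soft(x[j] + step * g[j], step * lambda);
    }

    for (int j = 0; j < N; j++)                  /* recovered support */
        if (fabs(x[j]) > 0.05)
            printf("x[%2d] = %6.3f  (true %6.3f)\n", j, x[j], x_true[j]);
    return 0;
}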

Some cool applications of this are single-pixel cameras (using the DMD approach I mentioned earlier), better MRI reconstruction, etc. I tried implementing one of the core algorithms (the "in-crowd" algorithm for basis pursuit denoising) required to do this, but haven't finished.
Software engineer, dabbling in electronics after a long time away from the subject. I have gone under the name Faraday's Cage in the past.
 

Offline coppice

  • Super Contributor
  • ***
  • Posts: 5155
  • Country: gb
Re: Neat algorithms
« Reply #204 on: December 01, 2019, 09:36:25 pm »
One bleeding-edge algorithm quite relevant to electronics engineering that I find is not very well known is "compressive sensing".
Compressive sensing is an interesting concept. It has been discussed on this forum a few times.
 
The following users thanked this post: taktoa

Offline Yansi

  • Super Contributor
  • ***
  • Posts: 3115
  • Country: 00
  • STM32, STM8, AVR, 8051
Re: Neat algorithms
« Reply #205 on: December 01, 2019, 09:41:36 pm »
Does anyone have any recommendation for a rather fast single-precision float logarithm for ARM (M4, M7 cores with FPU)?

I don't care whether it's natural log or log2, I'll take the faster one! (Needed for some real-time audio dynamics processing.) Guessing log2 will be the easiest of them, since floats store the mantissa and exponent separately: take the log2 of just the mantissa and add the exponent.
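Something like this sketch is what I have in mind (untested, with an ad-hoc quadratic fit through m = 1, 1.5, 2 that I just made up; absolute error is around 0.02 in log2 terms, i.e. roughly 0.12 dB, so a better polynomial would be needed for real accuracy):

Code: [Select]
/* Sketch of the mantissa/exponent idea (untested): pull the exponent out
   of the IEEE-754 bits, then approximate log2 of the mantissa m in [1,2)
   with a quadratic through m = 1, 1.5, 2. Absolute error ~0.02. */
#include <stdint.h>
#include <string.h>

static inline float fast_log2(float x)   /* assumes x > 0 and normal */
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);               /* safe type pun */
    int e = (int)((bits >> 23) & 0xFFu) - 127;    /* unbiased exponent */
    bits = (bits & 0x007FFFFFu) | 0x3F800000u;    /* force exponent to 0 */
    float m;
    memcpy(&m, &bits, sizeof m);                  /* mantissa, in [1,2) */
    float p = (-0.339851f * m + 2.019553f) * m - 1.679702f;  /* ~log2(m) */
    return (float)e + p;
}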
« Last Edit: December 01, 2019, 09:49:09 pm by Yansi »
 

Offline taktoa

  • Newbie
  • Posts: 2
  • Country: us
    • taktoa.me
Re: Neat algorithms
« Reply #206 on: December 01, 2019, 09:58:13 pm »
Another neat algorithm is one for regular expression matching implemented by a tool called icgrep. Essentially, if you have a regular expression R, you look at all the primitive matching expressions (e.g.: character classes and characters), and for each one you create a bitstream with 1 bit for every character in the text you are searching through, which is 1 if the character class matches at that position and 0 if it doesn't. Then you build up more complicated regexes by taking bitwise OR (for union), bitwise AND (for intersection), or bitwise NOT (for complement) to produce new bitstreams. The question that remains is: how do we handle the * operator? It turns out that we can express it in terms of + in a clever way -- it exploits the carrying nature of +, though I don't remember the exact details of how it works.

EDIT: here's the paper: https://dl.acm.org/citation.cfm?id=2628079
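If I remember the paper right, the * trick is the MatchStar operation: MatchStar(M, C) = (((M & C) + C) ^ C) | M, which advances every match cursor in M through an arbitrarily long run of class-C characters at once, using the adder's carry chain to do the scanning. A toy single-block sketch (my own illustration; icgrep itself works on wide SIMD blocks with the carry chained across them):

Code: [Select]
/* Toy bitstream matcher for the regex a[bc]*d over one 64-char block.
   Bit i of each stream corresponds to text[i] (LSB first), so carries
   from 64-bit addition propagate forward through the text. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* bit i set iff text[i] belongs to the character class `set` */
static uint64_t char_class(const char *text, const char *set)
{
    uint64_t s = 0;
    for (size_t i = 0; i < 64 && text[i]; i++)
        if (strchr(set, text[i]))
            s |= 1ull << i;
    return s;
}

/* advance cursors M through zero or more characters of class C:
   the addition's carry ripples through each run of consecutive 1s in C */
static uint64_t match_star(uint64_t M, uint64_t C)
{
    return (((M & C) + C) ^ C) | M;
}

int main(void)
{
    const char *text = "abbcd_ad_axd";
    uint64_t cc_a  = char_class(text, "a");
    uint64_t cc_bc = char_class(text, "bc");
    uint64_t cc_d  = char_class(text, "d");

    uint64_t m = cc_a << 1;          /* cursors just after each 'a' */
    m = match_star(m, cc_bc);        /* consume [bc]* */
    m = (m & cc_d) << 1;             /* consume the final 'd' */

    /* each set bit is now a cursor one past a complete match:
       here offsets 5 ("abbcd") and 8 ("ad"); "axd" does not match */
    for (int i = 0; i < 64; i++)
        if ((m >> i) & 1)
            printf("match ends at offset %d\n", i);
    return 0;
}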
« Last Edit: December 01, 2019, 10:00:19 pm by taktoa »
Software engineer, dabbling in electronics after a long time away from the subject. I have gone under the name Faraday's Cage in the past.
 

Online T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 14735
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: Neat algorithms
« Reply #207 on: December 02, 2019, 05:49:34 am »
Does anyone have any recommendation for a rather fast single-precision float logarithm for ARM (M4, M7 cores with FPU)?

I don't care whether it's natural log or log2, I'll take the faster one! (Needed for some real-time audio dynamics processing.) Guessing log2 will be the easiest of them, since floats store the mantissa and exponent separately: take the log2 of just the mantissa and add the exponent.

With FPU?  Just use the compiler as is? ???  I don't know that you'll do much faster than that...

For integer machines with multiply, repeated squaring and shifting yields successive bits of the result (the MSB shifted out is the next bit of the result); similarly, for machines without multiply, there's the Feynman algorithm (which I think was listed earlier in this thread?).  Expect it to take a ballpark of a hundred instruction cycles to compute results of typical width on typical machines.
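For illustration, here's a minimal all-integer sketch of the squaring method (my own quick illustration; the Q2.30 format and 16 output bits are arbitrary choices):

Code: [Select]
/* Fractional part of log2 by repeated squaring, integer-only.
   Input: x in Q2.30 fixed point with 1.0 <= x < 2.0, i.e. the mantissa
   left over after the integer part of the log has been removed with a
   shift (or clz). Output: 16 fractional bits of log2(x), in Q0.16. */
#include <stdint.h>

uint32_t log2_frac_q16(uint32_t x)
{
    uint32_t frac = 0;
    for (int i = 0; i < 16; i++) {
        uint64_t sq = ((uint64_t)x * x) >> 30;   /* square, stay in Q2.30 */
        frac <<= 1;
        if (sq >= (2ull << 30)) {                /* squared value >= 2.0:  */
            frac |= 1;                           /* next result bit is 1   */
            sq >>= 1;                            /* renormalize into [1,2) */
        }
        x = (uint32_t)sq;
    }
    return frac;
}

Each squaring doubles the log, so whether the squared value overflows past 2.0 is exactly the next bit of the result.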

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 2703
  • Country: si
Re: Neat algorithms
« Reply #208 on: December 02, 2019, 07:48:19 am »
Does anyone have any recommendation for a rather fast single-precision float logarithm for ARM (M4, M7 cores with FPU)?

I don't care whether it's natural log or log2, I'll take the faster one! (Needed for some real-time audio dynamics processing.) Guessing log2 will be the easiest of them, since floats store the mantissa and exponent separately: take the log2 of just the mantissa and add the exponent.

Yep, just use logf(x) or log2f(x) from math.h.

The C compiler's built-in libraries tend to have reasonable performance. But if you want things to be really fast and don't mind some precision loss, you can use a lookup table with interpolation.
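For instance, something along these lines (a rough, untested sketch; the 64-entry table is an arbitrary size, giving an absolute error around 4e-5):

Code: [Select]
/* log2 via exponent extraction plus a small lookup table with linear
   interpolation over the mantissa. Sketch only: the table size is an
   arbitrary choice, and x must be positive and normal. */
#include <stdint.h>
#include <string.h>
#include <math.h>

#define LUT_BITS 6
#define LUT_SIZE (1 << LUT_BITS)
static float lut[LUT_SIZE + 1];

void log2_lut_init(void)      /* fill the table once at startup */
{
    for (int i = 0; i <= LUT_SIZE; i++)
        lut[i] = log2f(1.0f + (float)i / LUT_SIZE);
}

float log2_lut(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);
    int e = (int)((bits >> 23) & 0xFFu) - 127;         /* exponent */
    uint32_t m = bits & 0x007FFFFFu;                   /* 23-bit mantissa */
    uint32_t idx = m >> (23 - LUT_BITS);               /* table index */
    float frac = (float)(m & ((1u << (23 - LUT_BITS)) - 1u))
                 / (float)(1u << (23 - LUT_BITS));     /* position in segment */
    return (float)e + lut[idx] + (lut[idx + 1] - lut[idx]) * frac;
}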

For fast, optimized ways of doing more complex math there is the official ARM CMSIS-DSP library, which contains hand-optimized inline assembler code. It supports all sorts of math with vectors, complex numbers, matrices, FFTs, etc., in both float and fixed point.
 

Offline Yansi

  • Super Contributor
  • ***
  • Posts: 3115
  • Country: 00
  • STM32, STM8, AVR, 8051
Re: Neat algorithms
« Reply #209 on: December 02, 2019, 11:44:06 am »
You guys are wrong.

The C math library is not optimized for speed, but for best accuracy. I don't need the accuracy down to the last bit or digit, hence I can make some approximations to make things significantly faster.

For example, by using fastlog2 from this library, I can get pretty good accuracy (from a brief test, better than 0.1%) in less than half the clock cycles. The task for me is just to figure out how much accuracy I actually need. As it is for an audio application, you are probably not going to hear a difference of 0.1 dB, let alone 0.01 dB.

CMSIS-DSP is not always the best-suited library for everything, and it doesn't provide any logarithm or exp (power) functions - at least I can't see any.

The library log2f runs in about 140 cycles on a Cortex-M4 (ARMCC, -O0). I currently have plenty of processing power left to run even these monstrosities, but I don't like it much. :)

EDIT: Forgot the link to the library! https://github.com/romeric/fastapprox
« Last Edit: December 02, 2019, 11:47:02 am by Yansi »
 

Offline nigelwright7557

  • Regular Contributor
  • *
  • Posts: 231
  • Country: gb
    • Murton-Pike Systems
Re: Neat algorithms
« Reply #210 on: December 05, 2019, 03:03:19 pm »
If you are creating new algorithms, and especially complicated ones, always flowchart them first and get that right before coding.
One of the most complicated pieces of code I wrote was a circuit board auto-router.
Even after careful flowcharting it still wasn't quite right.
You can get caught out by things like getting stuck in infinite loops, going round and round.
Testing is absolutely vital and you need to throw lots of different data at an algorithm to try and catch it out.


PCBCAD51/PCBCAD360/PCBCAD720 PCB design software https://www.murtonpikesystems.co.uk
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 4724
  • Country: fr
Re: Neat algorithms
« Reply #211 on: December 05, 2019, 04:43:42 pm »
As we discussed earlier, sometimes the best approach to implementing a costly operation is to do without it altogether. You can often rewrite your computations so that they use simpler/faster operations.

For instance, it's very common to avoid using the square root function. Likewise, you can often transform your formulas so you can avoid using log.

If this approach fails for any reason and you absolutely need a log function, there are ways to implement it efficiently (at the expense of accuracy). There are polynomial approximations, for instance, which I think is what the library posted above uses (along with a trick exploiting the IEEE FP format). That should already be pretty fast. If it's still not fast enough, and you can deal with even less accuracy, there's a very simple way of getting a log2(x): scale x so you make it an integer with the maximum resolution you can, and then compute the integer log2(n) ultra fast, using the ARM clz instruction for instance. If you're using GCC and don't want to mess with assembly directly, you can use one of the __builtin_clz() built-in functions.

Example of fast integer log2 in C (GCC only though!):

Code: [Select]
static inline uint8_t Log2Int(uint32_t n)
{
    /* floor(log2(n)) = bit index of the most significant set bit;
       __builtin_clz() counts the leading zeros of an unsigned int */
    return (uint8_t)(8 * sizeof(n) - 1 - (uint8_t)__builtin_clz(n));
}

Then you can scale it back: choose a scale factor a, compute n = floor(a*x) with a multiplication, and then log2(x) ~ Log2Int(n) - log2(a).
Of course the resolution is limited, but it could be enough.
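For instance, a quick illustration of the scaling (the factor 2^16 here is an arbitrary choice):

Code: [Select]
/* Illustration: integer-resolution log2(x) for x in [2^-16, 32768),
   using the arbitrary scale factor a = 2^16, so log2(a) = 16. */
static inline float Log2Scaled(float x)
{
    uint32_t n = (uint32_t)(x * 65536.0f);   /* n = floor(a * x) */
    return (float)Log2Int(n) - 16.0f;        /* Log2Int(n) - log2(a) */
}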
If you need more accuracy, you can further compute the fractional part of the log2 up to a certain number of bits. I wrote the following function for instance:

Code: [Select]
// Binary log2(n) with nFrac fractional bits.
// Assumes: n >= 1
//
float BinaryLog2b(uint32_t n, uint8_t nFrac)
{
    const uint32_t n1 = 1;
    uint8_t nLog2, i;
    float x, y, b;

    nLog2 = Log2Int(n);                        // integer part of the log
    x = ((float) n) / ((float)(n1 << nLog2));  // normalize x into [1, 2)
    y = (float) nLog2;

    // squaring method: each squaring yields one more fractional bit
    for (i = 0, b = 0.5f; i < nFrac; i++, b *= 0.5f)
    {
        x = x * x;
        if (x >= 2.0f)
        {
            y += b;      // emit a 1 bit with weight b
            x *= 0.5f;   // renormalize into [1, 2)
        }
    }

    return y;
}

Tested on x86-64: with nFrac = 8, for instance, it's still 3 times faster than the std math lib log2() function, and with decent accuracy. But it takes an integer input. It could typically be used with fixed point, or after scaling an FP number as I said.


« Last Edit: December 05, 2019, 05:01:03 pm by SiliconWizard »
 

Offline SiliconWizard

  • Super Contributor
  • ***
  • Posts: 4724
  • Country: fr
Re: Neat algorithms
« Reply #212 on: December 05, 2019, 04:57:50 pm »
Testing is absolutely vital and you need to throw lots of different data at an algorithm to try and catch it out.

I completely second this. Too many people IMO, particularly those who use unit testing (which is already a good thing), overlook this because of the "safety net" feeling.

Feeding your algorithms A LOT of data, especially data from sources that are not YOU, makes all the difference. IME, the number of bugs (and their "trickiness") you can catch this way is an order of magnitude higher than with unit testing alone (which doesn't mean you shouldn't do unit testing...)

 

Offline NorthGuy

  • Super Contributor
  • ***
  • Posts: 1945
  • Country: ca
Re: Neat algorithms
« Reply #213 on: December 05, 2019, 07:28:09 pm »
Testing is absolutely vital and you need to throw lots of different data at an algorithm to try and catch it out.

One of the best testing methods is to deliberately try to create situations that would break your algorithm.
 

Offline mrflibble

  • Super Contributor
  • ***
  • Posts: 1988
  • Country: nl
Re: Neat algorithms
« Reply #214 on: February 27, 2020, 12:18:31 am »
Recently I came across a couple of linear programming formulations for graph-theoretical problems. Which was pretty handy, because it was exactly what I was googling for at the time. Imagine that. ;D

The problem in particular was how to constrain a graph such that it will not contain any loops. Turns out you can do that by writing an LP for the maximum average degree of the graph in question. See chapter 11 in this PDF:
https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/graph-theory-algorithms-book/latest-r1908.pdf

Related links:

For those interested, be sure to check the links for both the old and the new Google Code archive. The old and new each contain unique and IMO useful stuff that is not included in the other. Case in point: the new one does not contain the linear programming chapter. And best grab the Bitbucket repo while you can. No way of knowing if (read: when) Bitbucket goes further down the drain.

 

