Author Topic: How correct is this video about the future of silicon  (Read 1592 times)


Offline pcprogrammerTopic starter

  • Super Contributor
  • ***
  • Posts: 4670
  • Country: nl
How correct is this video about the future of silicon
« on: February 25, 2023, 11:30:35 am »
Being an engineer who is out of touch with current developments, I don't know whether what is presented in this video is correct.

I have read before about the tunneling effect mentioned as a holdback for making transistors smaller, and understand it to some extent, but I wonder whether the other materials mentioned can indeed overcome this problem.



Also in the comments someone wrote:

Quote
Silicon photonics is what I have the most hope in for in the next couple of decades. Imo, transitioning from electricity to light is just the most logical step forward. It will set moores law back by quite a bit, but the insane clock rate of the processors will make up for it. Most modern keyboards already use light to transmit signals.

And here I wonder about these keyboards that supposedly use light to transmit the signals.

Sure, my communication with the eevblog server runs via fiber optics, and I understand how that works, but keyboards with a fiber optic connection I have not heard of before. Audio equipment using fiber, yes, but other computer peripherals, no.

Another interesting aspect of course is switching with photons and how this can be done. Is it possible to make something similar to a transistor for photons? But this falls outside of my knowledge too.

But the technology far beyond my grasp is quantum computing  :o
Will it indeed become a replacement for what we have now on our desks, and how will it be programmed?
Being a very binary person used to sequential instructions, making sense of zeros and ones, I wonder about the intermediate states of these qubits  :o

So there is a wide area of discussion possible here.

Offline pcprogrammerTopic starter

  • Super Contributor
  • ***
  • Posts: 4670
  • Country: nl
Re: How correct is this video about the future of silicon
« Reply #1 on: February 26, 2023, 09:05:44 am »
Apparently there is not much interest in these topics.  :-//

But to address one issue I mentioned, it dawned on me that the "modern keyboards" referred to as using light to pass the signals are the ones that use infrared, like remote controls. I can hardly call that modern, since the infrared remote control has been around since the nineteen-eighties.

I also doubt it is used much with computers. Wireless based on radio signals is much more likely.

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: How correct is this video about the future of silicon
« Reply #2 on: February 26, 2023, 11:37:35 am »
I mean, radio is light, too.  Don't be a visible chauvinist. ;D

As I understand it, at least as of the last state of technology I read much about -- photonics is a lie because it can't be miniaturized.  Roughly speaking, nothing can be smaller than the wavelength of the light itself (within the medium).  So, ~50nm sorts of things, but we're already well below that.  The other oft-touted premise is faster interconnects, which apparently hasn't come to fruition: it seems the burden of a transmitter, detector, and perhaps all the encoding/decoding to make that useful, makes the cross-chip propagation delay that much worse than just using stupid old wires and repeaters (buffers).

The equivalent circuit in play here is a loss-dominant (RC) transmission line, for most metal layers with respect to substrate or other metal layers as ground plane.  For a distributed RC line, the delay grows quadratically with length (Elmore delay, roughly half the total resistance times total capacitance), and the bandwidth falls off correspondingly.  Anyway, clearly you combat that by adding repeaters -- just toss in an inverter every so often, which incurs some gate delay, but by balancing the two you make the total delay roughly linear in length instead of quadratic.  And it hardly takes any additional space, so it's feasible for wide buses (256 bits+).
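To put toy numbers on the repeater idea, here's a rough sketch; all values (resistance and capacitance per micron, buffer delay) are invented for illustration, not from any real process:

```python
# Toy Elmore-delay comparison: one long unbuffered RC wire vs. the same
# wire split into N buffered segments.
def wire_delay(r_per_um, c_per_um, length_um):
    # Distributed RC line: Elmore delay ~ (1/2) * R_total * C_total,
    # i.e. quadratic in length.
    return 0.5 * (r_per_um * length_um) * (c_per_um * length_um)

def repeated_delay(r_per_um, c_per_um, length_um, n_seg, t_buf):
    # Each short segment's delay is quadratic in the *segment* length,
    # plus one buffer delay per segment -- total roughly linear in length.
    seg = length_um / n_seg
    return n_seg * (wire_delay(r_per_um, c_per_um, seg) + t_buf)

R, C = 1.0, 0.2e-15      # ohm/um and F/um, made up
L_RUN = 1000.0           # a 1 mm cross-chip run
T_BUF = 5e-12            # 5 ps per inverter, also made up

print(wire_delay(R, C, L_RUN))                 # unbuffered
print(repeated_delay(R, C, L_RUN, 10, T_BUF))  # buffered: much smaller
```

With these made-up numbers the unbuffered run comes out several times slower than the buffered one, and doubling the length quadruples the unbuffered delay, which is the whole point of sprinkling in inverters.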

The alternative is an optical waveguide, which either uses metal layers and dielectric (SiO2), or additional materials (perhaps including a different index to make a waveguide akin to optical fiber), to guide waves without wires.  These tend to have higher Q factors, but are bulky and have other sometimes undesirable properties (dispersion: velocity varies with frequency).  So the interconnect itself might be good (give or take optical index, but it's likely better than the RC TL overall), but the problem is generating and receiving the signals themselves, which inevitably must be translated to logic-level voltages in wires, among other things.

The intermediate case you might ask about; and indeed there are modest-Q transmission lines (LC dominant).  AFAIK, these involve many metal layers, so that reasonable conductor thickness can be built up (for ~GHz on monolithic processes, layers are comparable or thinner than skin depth, remember!), and so that reasonable volume can be had for the dielectric (TLs are waveguides that support a DC mode: remember the wave flows in the space between conductors).  So you might have an, I don't know, eight metal layer process, and in those ~um of height, a "bathtub" structure can be made, with a wall of stitching vias through all layers making up the sidewalls (well, all layers that don't need to cross the TL -- obviously you can't go up or down forever in the stack if you also need signals or power to cross these TLs!), and solid metal on the bottom for the base, or perhaps substrate still (preferably at degenerate doping levels for higher conductivity).  The line itself is some metal layers near the top in the middle.  So, surrounded by SiO2 (an excellent dielectric, at least) and made of Al or Cu (mediocre given the tiny thicknesses, but usable).  The PCB equivalent is CPWG, but with more layers relieved under the trace, and vias densely packed (unlike for PCBs, "drilling" is 100% free on a photolithographic process).

I don't know that such TLs are feasible for logic chips (CPUs etc.), haven't heard anything with respect to that really -- but they can be used for low-Q and resonant structures, like the inductors and tuned circuits in Wifi radios and etc.  A typical inductor might be, whatever, 20pH or something? (I forget the typical orders of magnitude..) And have a Q around 5-12 in the 10GHz range.  So, pretty piss in general, but definitely good enough to be usable.

And not to say there's no application for optics and waveguides.  It's basically the only feasible way to get signals off chip in the fractional THz range and up.  Hence we have RADAR chips today which are something like, phased array over the chip itself, with the package being a "transparent" (i.e., dielectric) window on top.  There are optical devices (probably 10G+ fiber transceivers are?, but I haven't read about them in a long time, I don't know what current tech is -- probably conventional or VCSEL laser diode plus opto-acoustic modulator, something like that?) with the optical interface integrated with the package, no faffing about on-board with any of that alignment mess.  (Not that you'd be likely to work with PCB designs of them anyways, as the modules come standard as well i.e. SFP and such -- unless you work at such an OEM, you're probably just putting in the module socket, even!)

So, as for the fate of silicon -- it's just clickbait.  Silicon is too damn useful to ever go away.  I mean, how do you know, of course you can't, but just for sake of comparison, we have all these fancy compound semiconductors and GHz computers and THz (nearly) radios now, and... we still have fuckin' CD4000 logic floating around, man.  It's not going away, it's only getting better as far as having everything from the highest to the lowest technology node available to us.

Put another way: technology very rarely if ever wholly discards things.  There aren't many stepping stones, at least in practice.  Vacuum tubes are probably the most important example of one -- we'd never have developed the level of technology we did without them, but we have superseded them in all but a few niche cases (and many of those are more like physics apparatus than electronic devices, anyway).  That is, I posit it was a necessary condition to have vacuum tubes, to then develop semiconductors.  The chemical processes alone (like precision analysis (AA, MS) and zone refining) require precise control systems (or wholly electronic signals) that an electronic control is almost mandatory; while limited aspects of the semiconductor process could likely be implemented mechanically (or say with mag amps), I doubt the whole thing could've been developed without something as general as the vacuum tube -- or the transistor, hence it's a self-sustaining technology, but not self-starting.  (Whereas vacuum tubes are tolerant enough of chemical variation, and mostly depend on mechanical tolerances; precision metalforming is a precondition, which had been ably solved through the 19th century.  And all the required chemical elements were available by then, too.)

And even then, it's not that vacuum tubes -- or, say, horses -- are gone.  They're just not economic dependencies now.  Almost everything that's been developed, will still be around in some capacity, albeit maybe just by enthusiasts and historians.

So, I don't see silicon ever going away.  It's just too goddamned useful.  Even if only used as a base to build on (see current eGaN transistors for instance), it's just that good.

As for the main active semiconductor, who knows; it seems likely that, at least the way things are going these days, technology will continue to spread out -- that is, literally, widen.  Clock speeds may even decrease, but total computation continues to rise, moving ahead with more generalized types of computation (like neural network stuff -- this is probably reading into current hype too much, but just to say, it's conceivable that NN, ML, AI whatever stuff will continue to miniaturize, and implement deeper and deeper in the hardware (not necessarily as like memristor arrays and stuff, that might not be consistent enough even with significant development -- but still, arrays of specialized "neuron"-like CPUs can do the job), and eventually lead to basically silicon (or whatever) "brains" that are "taught" by dumping in a bitstream, and then you get out whatever (still approximate) function you need computed.  Along with this, low-loss logic methods may be combined with multilayer chips to get unprecedented volumetric density and compute-per-watt performance.

And to facilitate these approximate methods (and perhaps some of the computing becomes stochastic by itself), new programming paradigms will have to be developed that check the results and correct them, either by reinforcement, or more fundamental (error-correcting computing?) methods.  There's probably some kind of problem space similar to zero-knowledge proofs where, given an exact specification of some problem (function, algorithm, etc.?), a process can be devised to compute it in enough alternative ways simultaneously, and combine the results into a more-correct result, either giving complete coverage (all errors provably corrected), or repeating the process until it's arbitrarily correct.
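The simplest existing instance of "compute it several ways and combine into a more-correct result" is majority voting (triple modular redundancy). A toy sketch; the faulty adder and its error rate are entirely invented:

```python
import random

# An unreliable "hardware" operation: with probability p_err it returns
# a wrong value.
def noisy_add(a, b, p_err=0.2, rng=random):
    r = a + b
    return r + 1 if rng.random() < p_err else r

def voted_add(a, b, trials=3):
    # Run the unreliable computation several times and take the most
    # common answer (majority vote).
    results = [noisy_add(a, b) for _ in range(trials)]
    return max(set(results), key=results.count)

random.seed(0)
wrong_single = sum(noisy_add(2, 3) != 5 for _ in range(10000))
wrong_voted = sum(voted_add(2, 3) != 5 for _ in range(10000))
print(wrong_single, wrong_voted)  # voting roughly halves the error rate here
```

Real "error-correcting computing" schemes would be far cleverer than brute triplication, but the principle of redundant computation plus a combining step is the same.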

Quantum computing seems unlikely to be better than luggably portable, and at significant power consumption at that, due to the need for cryocooling.  But noise coupling is an arbitrary thing, and some scheme may yet be found that can be effectively shielded from the disturbances of ambient temperature, meanwhile growing to arbitrary sizes (qubits).  In general, once more people are working in this domain (designing, programming, using), solutions to far more difficult problems will become available, especially to notoriously intractable problems in QM for example -- and so bootstrapping can continue.  There is provably no limit to the power available from condensed matter*, I mean, up to the limit of information theory (information flow is power), so we have plenty of orders of magnitude to continue, we just have to figure out how.

*There's a proof in physics, equivalent to the Halting Problem.  It's no accident that condensed matter physics is notoriously difficult; you could potentially be integrating over a manifold of computers, and, who knows what the hell that even means in terms of statistical mechanical properties!

Tim
« Last Edit: February 26, 2023, 12:04:40 pm by T3sl4co1l »
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: tom66, pcprogrammer

Offline pcprogrammerTopic starter

  • Super Contributor
  • ***
  • Posts: 4670
  • Country: nl
Re: How correct is this video about the future of silicon
« Reply #3 on: February 26, 2023, 04:23:35 pm »
Thanks Tim,

some bits cleared up and others darkened a bit.  :o

I mean, radio is light, too.  Don't be a visible chauvinist. ;D

Already wrote that I'm a binary person  :-DD

Light is light and radio signals are radio signals.

But I see what you mean.

I agree with you that silicon is very useful and available in abundance, and I don't see it being fully replaced by something else. But from what I read about it, and what is shown in the video, it is reaching the limits of miniaturization.

So it is just wait and see what is thought of next.

Thinking back to my early days when I started to play with electronics, it was with germanium transistors and diodes, and if I recall correctly they broke down more easily than the later silicon ones. But this is long, long ago.

About the mix of technologies, I was also wondering about the size problems that would bring. Even though fibers are very thin, the connectors are still big compared to them, and it would take a lot of them to interface between a CPU and memory. Sure, serialization can be used, but that reduces the speed, limiting the gain of switching to light.

And even when faster systems are developed the software will ruin it again, as seen over the many years since the first home computer.  >:D

But kidding aside, today I looked a bit more into quantum computing, and having some understanding of it now, I think it will always be a special branch of computing. It is apparently good at solving complex, mathematics-heavy problems. Not really your average embedded controller that you write some nice code for in some mainstream language.



This video provided some insights, but most of it is still above my pay grade. With the superposition and entanglement I get this parallel-universes vibe off of it. We put an apple in the box, and as long as it is closed it can be any fruit you like, but as soon as we open it, it is an apple, that kind of logic.

We think it is the right answer because the computer says it is the answer, but is it the right answer out of all the possibilities that could be? I will just have to wait and see what the future brings.

But it is good for the brain to ponder on these things once in a while.

Cheers,
Peter

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: How correct is this video about the future of silicon
« Reply #4 on: February 26, 2023, 06:34:01 pm »
Yeah, I have no idea if QC will bring anything more than niche or mainframe sorts of applications, but as with the other materials there's no proof either way whether it can be miniaturized or generalized like anything else.

And there's a ton of "room at the bottom" for molecular computing, which might be anything from synthetic chemistry to genetic engineering, most likely carbon-based in either case.  The problem space for that kind of work is just so vast and complicated that we have no hope of working with it right now, but it's very much something QC can contribute to (and is presently, as I understand it, e.g. protein folding).  Maybe that's next, or maybe we need one more stepping stone first.

And yeah, QC is above my grade, to say the least.  I certainly know the basics of QM, well enough to understand a low-level description of QC hardware, say -- but how that relates to what space of equations you can program into, and solve with, such a system, I don't know.  The basic idea is to effectively run a superposition of states, then "refrigerate" away the "lossy" (read: by applied constraints*) energy, cooling it towards a desired final state, which happens to be your desired transformation.  In this way, most (all?) states of the system are explored, while the system is annealed towards a local (or hopefully global) minimum.  (And the global minimum is reachable via tunneling between states, so that's big.)

*The setup is something like setting the coupling constants (phase and magnitude) between qubits.  Whereas on a conventional computer, you'd initialize the memory of a process then let it run, here the system is set up with initial conditions (at least, to the extent those states can be set, i.e. energy levels or even superpositions thereof on the qubits), and coupling factors between qubits, then left to run (perhaps with removing excess energy from the system, driving it towards a desired final state).  That's a very linearized sort of explanation, anyway; I don't know how much nonlinearity affects that (or if such are being used or researched at all, presently), but probably the problem space of nonlinear systems is even more vast than the space of "just doing anything at all" in QC right now.  That's also a very "analog computing" explanation, and, I don't know offhand how procedural, or functional or other conventional paradigms for that matter, QC can really get; these are all very much a matter of developing the very frameworks, to develop the tools, to develop the programs to actually run on the things.
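The classical ancestor of that "set the couplings, then anneal toward a minimum" picture is simulated annealing on an Ising model, which is easy to sketch. Three "qubits" here are just classical spins, and the couplings J are made up; a real quantum annealer adds superposition and tunneling, which this toy entirely lacks:

```python
import math, random

# Three Ising spins with invented coupling constants encoding a small
# frustrated problem; annealing drives them toward a low-energy state.
random.seed(1)
J = {(0, 1): -1.0, (1, 2): -1.0, (0, 2): +0.5}

def energy(s):
    return sum(j * s[i] * s[k] for (i, k), j in J.items())

s = [random.choice([-1, 1]) for _ in range(3)]
T = 2.0
while T > 0.01:                      # cooling schedule
    i = random.randrange(3)
    flipped = s[:i] + [-s[i]] + s[i + 1:]
    dE = energy(flipped) - energy(s)
    # Accept downhill moves always, uphill moves with Boltzmann probability.
    if dE < 0 or random.random() < math.exp(-dE / T):
        s = flipped
    T *= 0.95
print(s, energy(s))  # ends in a low-energy configuration
```

The "program" lives entirely in J; the annealing loop is the same for any problem you can encode that way, which is roughly the appeal of the quantum version too.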

So yeah, very early, very high-concept and hard to do any sort of work with them; in time, applications will get easier, more broadly applicable, and they may even become general, who knows.  That will surely take decades to figure out.  There's no instant revolution, you'll see it coming and before you know it it'll seem natural and great (and terrible, all together at once as technology has always been).


And there's carbon-based computing, which very much seems a likely "end game" of self-sustaining tech, much as life is already (by definition).  I suppose that's the most likely case where [pure crystalline] silicon goes away.  (Silicon, the element, likely remains useful for polymer backbones and structures -- silicones, and glass and ceramics, are likely to stick around a long while.  Heck, even plankton, and some (many??) grasses, themselves use silica structurally!)  But that could take centuries (with shades of transhumanism in there if you like).


With respect to germanium (and other early semiconductors: copper oxide, selenium, etc.), it's easier to process but performs worse.  Carrier mobility is actually quite good (Ge is a bit better than Si even, especially for holes), but the maximum temperature limit is quite low (corresponding to bandgap, which is lower than Si's).  Actually that's part of the deal: lower bandgap <--> lower temp limit <--> higher n_i (intrinsic carrier concentration) <--> higher doping required for consistent p/n domains <--> more tolerant of impurities (n_i dominates over n_impurity).  So it's no accident it came first (sort of).  It takes really pure Si to do the job, which came along later.

Likewise, high bandgap materials are harder to process, and while they can offer higher on/off ratios, and gm, current density, etc. (especially because of compound-gradient effects like 2DEG, the key ingredient of a HEMT), they're also not likely to displace Si for computing because of the higher switching voltage (or something like that? I forget exactly how it works out despite gm being higher) and quite poor hole mobility (CMOS is basically exclusive to Si and Ge, and a few others that aren't nearly as easy to use).  Hence why for example that one Cray (Mark 2?) was GaAs NMOS -- good performance sure, but resistor pullups cost stupendous power consumption!

Or consider SiC: with higher bandgap and poorer mobility than Si, it seems an inconvenient choice; its high thermal conductivity and exceptional breakdown voltage (critical field strength) win out, however.  The chip inside, say, a 1200V 30A 200W MOSFET is absolutely dinky!  Bulk resistance still dominates Rds(on) (less substantial in MOSFETs probably, but extremely noticeable in SiC Schottky diodes).  Hence the need for die thinning, backside grinding, that stuff.  On top of the hardness and high-temperature processing, the sheer number of polymorphs it can crystallize into, and its propensity for pernicious defects like screw dislocations, are reasons for its late development.  The other thing it's got going for it is high-temperature operation (again, mainly thanks to the higher bandgap); not that commercial parts benefit from it, but specialized applications (downhole sensors, mil/aerospace) can.

I forget why GaAs, GaN, etc. aren't suitable for high temperatures, or not normally anyway.  Other than packaging of course -- also the main reason AFAIK why commercial SiC parts are limited to 150/175C max, the damn epoxy!

Anyway, wide bandgap probably won't lead computing, is my understanding -- aside from possible exceptions like computers using quantum dots made of whatever materials; but they'll continue to serve high frequency and high power applications ably.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: pcprogrammer

Offline pcprogrammerTopic starter

  • Super Contributor
  • ***
  • Posts: 4670
  • Country: nl
Re: How correct is this video about the future of silicon
« Reply #5 on: February 27, 2023, 07:23:25 am »
Thanks again Tim,

being educated on mid level electronics, parts of what you wrote are above my pay grade.  |O

A bit of my background. After being schooled at a Dutch MTS (Mid-level Technical School) in electronics, I found a job designing and building microcontroller-based systems to convert real-life signals to MIDI. Did both the hardware and the firmware. Burned out due to stress. Took a couple of years to fully recover and started my own business as a one-man shop. Did a couple of projects, again with both hardware and firmware, in different fields, but slowly moved on to software only and in the end system management, which I hated very much but which brought in good money. Touched a lot of technology along the way but lost the low-level nitty gritty. After that followed a six-year period of building my own house. So something completely different.

For about seven years now I've been back in the game with hobby electronics, but again mainly microcontroller based. I have lost a lot of my math and physics knowledge; I didn't really need it during my working years. I found that this is not like riding a bike, which you never seem to forget how to do. I also found that relearning things gets harder with aging and my illness. Being chronically tired, it does not sink in that well anymore.

But having a curious mind I try to read up on things and look for videos about them; the problem with the internet is just the amount of false information on it, and sifting through it is not always easy.

I'm glad to read that it is not just me not fully grasping quantum computing. It seems you at least were taught the basics of quantum mechanics.

Peter

Offline iMo

  • Super Contributor
  • ***
  • Posts: 5570
  • Country: va
Re: How correct is this video about the future of silicon
« Reply #6 on: February 27, 2023, 09:30:46 am »
During my studies, GaAs was "the big thing". Today you hardly see it. The problem with GaAs is that the As migrates off the structures at higher temperatures, afaik. QC is still rather an academic buzzword, imho; you would need millions of qubits to make any meaningful computation, people say. We will live with Si-related technologies for a long time, I bet..
Readers discretion is advised..
 

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: How correct is this video about the future of silicon
« Reply #7 on: February 27, 2023, 09:52:29 am »
Ah, cheers :)

Yeah, I have physics and EE degrees (BS level) so that's more or less my level; although I doubt I could work my way through solving Schrodinger's equation anymore, heh.  It is indeed a skill of repetition!

Chronic fatigue, I can imagine... I'm prone enough to procrastinate as it is; I wouldn't need that to make it even worse. Not that you're necessarily so inclined, but it's sure not going to help anyone.  I'm still in quite good health, but then I'm only in my 30s; the main things are probably lack of exercise, and maybe some late-onset ADHD, in that I recognize the symptoms, just not the severity, at least that some have... Maybe I should get it checked, but I manage; shrug.

The hardest part about just watching or reading up on stuff is 1. it stays very superficial for engagement points so it's hard to find much depth in the first place; 2. it's hard to push yourself to even just read the equations, and think through / feel what they mean; and somewhat corollary to these, 3. it's very tempting to take something at face value rather than engaging critically and asking, "is that really reasonable?"  Kinda case-in-point here, future speculation is -- I mean, one of the least confident questions we can ask about, right.  So any video on it can only be a bit of fun, nothing serious.  Maybe they do make material predictions, maybe not; maybe it affects what a viewer will do in the mean time, or in pursuit of such goals, maybe not; but it seems reasonable that one should probably try not to take it very seriously.  Mind, not criticizing your preferences/interests here; more rather explaining mine if anything, and, by way of connecting the two topics [futurism and autodidactism].

Along those lines, I've found this site quite illuminating from time to time: https://web.archive.org/web/20210426092231/https://mysite.du.edu/~jcalvert/index.htm it's dropped off the internet unfortunately (gosh, wonder if he's retired, or what), but Archive remembers all.  It's been a while, I should review many of the subjects again, really...
Or channels like 3blue1brown, excellent explanations, still usually not too mechanically in-depth for the same reasons but highly factual and earnest.  Oh also I remembered these, https://www.eevblog.com/forum/chat/fun-for-nerds/msg4725368/#msg4725368

As for less factual kinds, I've trained myself, I like to think anyway, to recognize them; and while they can be quite tempting and entertaining, I gravitate away when I realize they tend to sensationalize, or say things that aren't well supported, or miss the point (lack nuance), or are factually incorrect, those sorts of things.  It's tricky to spot, and it's tempting to just take things at face value, like I said.

It's also kind of a... "you just hate having fun" feel, I suppose someone might accuse me of; someone who does take such things at face value and enjoys such media without hangups (or more nuanced: in spite of them, even).  A very "ignorance is bliss" sort of position maybe, but I don't mean that pejoratively; the expression is more than a little factual I would say.  So, they might not be wrong with such an observation, I'll admit.  There's probably some elements/shades of... stoicism? asceticism?, in here too.

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 

Offline pcprogrammerTopic starter

  • Super Contributor
  • ***
  • Posts: 4670
  • Country: nl
Re: How correct is this video about the future of silicon
« Reply #8 on: February 27, 2023, 10:30:16 am »
Ah only in your thirties  :)

Those were the good times. I'm nearing my sixties  :palm:

... although I doubt I could work my way through solving Schrodinger's equation anymore, heh.  It is indeed a skill of repetition!

For sure. I used to be good at the math taught in school, but have a hard time understanding some of the more advanced equations now. Things like Pythagoras or trigonometry are still ok, but differential equations and such take digging very deep.

I would like to learn more about calculating coefficients for filters in DSP, but there are so many other things I also would like to know or do. Hard to choose, and I often doze off halfway through when reading stuff.

....  A very "ignorance is bliss" sort of position maybe ...

In a way that would be nice and make things simple, but I fear I would have to lose a lot of IQ points to become that way.  :-DD

Thanks for the links you provided. Will have a look and see if I can learn something.

Peter

Offline pcprogrammerTopic starter

  • Super Contributor
  • ***
  • Posts: 4670
  • Country: nl
Re: How correct is this video about the future of silicon
« Reply #9 on: February 27, 2023, 11:12:32 am »
An example of an article that tells you nothing and somewhat contradicts itself.

https://www.uopeople.edu/blog/what-is-light-based-computing/

In the first part they say that heat is one of the problems in electron-based computing and that light-based computing could be the answer. Another issue is power consumption, where light could also be the solution.

Yet further down they mention the problems we face that still need to be overcome.

Quote
However, there’s a trifecta of challenges that arise when attempting to achieve pure optical computing, namely: heat, power, and size.

Luckily Google suggested some other search terms to look into. I found some sites on "optical transistors"; hopefully they provide some more insight into the workings of optical computers.

Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: How correct is this video about the future of silicon
« Reply #10 on: February 27, 2023, 06:00:54 pm »
For sure. I used to be good at the math taught in school, but have a hard time understanding some of the more advanced equations now. Things like Pythagoras or trigonometry are still ok, but differential equations and such take digging very deep.

I would like to learn more about calculating coefficients for filters in DSP, but there are so many other things I also would like to know or do. Hard to choose, and I often doze off halfway through when reading stuff.

Well now that's an easier one, at least!  Well, depending on what aspect you're looking for, heh.

First you need the basics: IIR or FIR type.  For analysis, the Z-transform is analogous to the Laplace/Fourier transform; despite z^-1 being simply a time shift (taking the previous sample x[n-1] while evaluating an equation at sample n), it works out the same, and indeed there is a mapping between the two pole planes: the s-plane left half-plane (stable poles) corresponds to the inside of the z-plane unit circle, and the right half-plane (unstable poles) to the outside.  So you can transform a given continuous-time (RLC) filter to a discrete-time (sampled) filter, within restrictions of sample rate and all that of course.
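A tiny numerical check of that stability mapping, using the fact that a sampled pole sits at z = exp(s*T); the sample period and pole values are arbitrary examples:

```python
import cmath

T = 1e-3  # sample period (1 kHz rate), arbitrary

def to_z(s):
    # Map a continuous-time pole s to its discrete-time location.
    return cmath.exp(s * T)

stable_s = complex(-100.0, 2 * cmath.pi * 50)    # decaying 50 Hz mode
unstable_s = complex(+100.0, 2 * cmath.pi * 50)  # growing 50 Hz mode

print(abs(to_z(stable_s)))    # magnitude below 1: inside the unit circle
print(abs(to_z(unstable_s)))  # magnitude above 1: outside the unit circle
```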

Of IIR, there are a few methods to use, of which the most popular / easy / stable / general is probably the biquad.  Five coefficients which can be tuned for any 2nd-order filter -- bandpass/stop, LP/HP, and all-pass too.  In short, a rational 2nd order, i.e. two poles and two zeroes, so take your pick.  Solutions are straightforward enough:
https://www.earlevel.com/main/2013/10/13/biquad-calculator-v2/
View source to see the expressions in the JS, should be easy enough to understand even if you don't know JS exactly.
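For reference, a minimal biquad sketch (Direct Form I, floating point; the coefficient order and sign convention here are assumed to match that calculator, with the feedback terms subtracted):

```javascript
// Minimal biquad sketch, Direct Form I:
//   y[n] = a0*x[n] + a1*x[n-1] + a2*x[n-2] - b1*y[n-1] - b2*y[n-2]
// Sign convention (feedback subtracted) assumed to match the calculator.
function makeBiquad(a0, a1, a2, b1, b2) {
  let x1 = 0, x2 = 0, y1 = 0, y2 = 0;  // delay elements
  return function (x) {
    const y = a0 * x + a1 * x1 + a2 * x2 - b1 * y1 - b2 * y2;
    x2 = x1; x1 = x;                   // shift the input history
    y2 = y1; y1 = y;                   // shift the output history
    return y;
  };
}
```

Five multiplies and four adds per sample; plug in the a/b values from the calculator and feed it one sample at a time.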

The derivation of these formulas, in turn, will be along the same lines as traditional analytical filters: position the poles/zeroes for a given desired response, or approximations to a straight-line (passband within x dB, stopband beyond y dB, etc.) response.

And a single biquad stage is 2nd order, so what to do with higher orders?  Same thing we do with op-amps, of course: each stage contributes two poles, and we cascade them as needed for the desired higher-order overall response.  So the first stage has close-in poles, the next are higher Q, and so on, until you have the total response for whatever Butterworth or Chebyshev etc. type you wanted.
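Cascading is just function composition over the sample stream; a sketch (the `onePole` stage here is only an illustrative 1st-order stand-in, not a real 2nd-order section):

```javascript
// Run each sample through the stages in series, like chaining op-amp stages.
function cascade(stages) {
  return x => stages.reduce((v, stage) => stage(v), x);
}

// Illustrative 1st-order stage (leaky integrator), NOT a real biquad:
function onePole(k) {
  let y = 0;
  return x => (y += k * (x - y));
}

// Two identical stages in series give a 2nd-order (-40 dB/decade) rolloff:
const lp2 = cascade([onePole(0.3), onePole(0.3)]);
```

In practice each element of `stages` would be one biquad section with its own pole positions, per the Butterworth/Chebyshev pole tables.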

Repeated 2nd-order blocks are preferred over a single higher-order block because of sensitivity: while it's possible to implement an IIR filter that way, accumulator bits / coefficient precision become much more critical.  By analogy, consider the Sallen-Key filter, which is said to be more sensitive to component values/tolerances than other topologies like the MFB; now suppose you go further, using an nth-order Sallen-Key (yes, higher-order active filters around a single op-amp are possible!), which you can imagine will be that much more sensitive to component values still.  Then consider that the poles are positioned within the unit circle rather than on the plane, and a small numerical error can easily push them outside the circle (the poles/zeroes of a polynomial are highly sensitive to its coefficient values -- it's an "ill-conditioned" problem in general), and that error can arise both from rounding of the coefficients and from truncation in the accumulator.

I suppose if you just use for example floating point (at float or double precision as the case may be), a higher-order filter might be tolerable, and maybe you can save a few cycles on the computation by making it a little more compact, or using fewer memory ops or whatever.  For most purposes, with say 16-bit fixed point, or 32-bit (or below) floats (standard or non-), 2nd order is probably preferred.

As for FIR, they're quite trivial: the coefficients are the impulse response, done and done.  You're literally cranking the convolution of the input with the filter's response, for each and every sample of the output.  So, you want a Gaussian response?  Plot a Gaussian hump and away you go!  Want something sharper?  Add some ringing, using whatever exponential or sinc shaped waveforms you might like; or choose any of the various well-known window functions for their respective spectral properties; it's all very easy to do, perfectly stable, and the only downside is, if you need a low frequency cutoff, well, you're going to need to convolve a heck of a lot of samples...
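A sketch of that convolution with a circular buffer -- the taps array is literally the impulse response, and each output is the dot product of the taps with the most recent inputs:

```javascript
// FIR sketch: the coefficient array IS the impulse response.  Each output
// sample is the dot product of the taps with the most recent N inputs.
function makeFir(taps) {
  const buf = new Array(taps.length).fill(0);  // circular input history
  let pos = 0;
  return function (x) {
    buf[pos] = x;
    let acc = 0;
    for (let i = 0; i < taps.length; i++) {
      // walk backwards through the history, wrapping around the buffer
      acc += taps[i] * buf[(pos - i + taps.length) % taps.length];
    }
    pos = (pos + 1) % taps.length;
    return acc;
  };
}
```

Feed it an impulse and the taps come back out unchanged, which is the whole point -- and why drawing the desired impulse response (Gaussian hump, windowed sinc, whatever) directly gives you the filter.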

So IIR tends to be better for low cutoffs, but depending on how much CPU power / memory bandwidth you have available, either is often suitable.

So, hardest part, is as hard as any other filter -- polynomial solutions, of arbitrary order, approximating some frequency response.  Easiest part is, I'd say easier than analog filters, it's literally just the mechanical process of adding samples multiplied by coefficients (MAC operation).  In a FIR, no coefficients interact with each other, it's unconditionally stable; in IIR, it's equivalent to an active (analog) filter.

And sometimes (often, even?!) we don't even bother with that, because the shitty frequency response of a "boxcar" filter (rect[n] window, sliding average) is a suitable sacrifice for its simplicity: just toss each sample into a circular buffer, add the latest (nth) sample to the accumulator and subtract the last (n-Nth) sample -- the one that just fell out of the buffer.  Absolutely zero multiplication required (well, aside from normalizing the output gain), and all it needs is memory -- of which only two accesses are needed per sample.

This is a kind of example (I think?) of a CIC ("cascaded integrator comb") filter: notice the summation per sample (the output/accumulator value) takes the previous value plus the difference between the nth and n-Nth samples -- it's the integral of a derivative.  Which does mean the value could become offset accidentally (the integral of a derivative equals the function "plus a constant", which is to say, the DC offset is undefined in general), but in a deterministic computer that "accident" by definition never happens, and the "plus a constant" equals the initialized offset (which in turn makes it a definite integral starting from zero, "plus a constant" accounted for).
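That add-the-newest/subtract-the-oldest trick, sketched:

```javascript
// Boxcar (sliding average): one add, one subtract, two memory accesses per
// sample -- no multiplies except the final gain normalization.
function makeBoxcar(N) {
  const buf = new Array(N).fill(0);  // circular buffer of the last N samples
  let pos = 0, acc = 0;              // running sum of the buffer contents
  return function (x) {
    acc += x - buf[pos];  // add the newest, subtract the one falling out
    buf[pos] = x;
    pos = (pos + 1) % N;
    return acc / N;       // normalize (skip or defer this in fixed point)
  };
}
```

With `N` a power of two, the normalization collapses to a shift, and in integer arithmetic the accumulator never drifts -- the deterministic-computer point above.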

As for diff eq, I'm not too into it, but I never really needed more than linear equations anyway (with, again, polynomial solutions -- filters, control loops, etc.).  And anything worse I'd gladly just plug into a numerical (or CAS) solver; particular analytical solutions are likely to be more curiosity than practical (e.g. Bessel functions that aren't any easier to manipulate).

There is one interesting problem I've played with: the temperature distribution on a flat sheet, for a circular isothermal heat source.  Assuming convection proportional to temp rise, we have heat loss at any given point on the sheet depending on its temperature, while also spreading out (through the sheet) to a larger radius, where there's more (differential) area to dissipate heat into, etc.  Obviously the sheet won't be uniformly heated -- at a basic guess, it should be reciprocal or logarithmic with distance, because of the available area at a given radius -- but because the loss depends on temp as well, it must be something just different from that.  Okay well, set up the equations, push things around a bit and, fair enough, there's an equation I can't solve.  Let me see what Wolfram Alpha thinks of it. ;D  Turns out it's a Bessel function, with the first zero I think at the edge of the sheet (for some circular outer edge; which we can take to infinity if we like).  I forget the exact proportions that go into and around the function, but yeah, it's hottest in the center, dropping as radius goes up, and not quite as 1/r or ln(r/r_0) or anything, it's a little bit different.  So that was a cool problem.

Suppose I should go set that up again and see what the exact ratios were...

I recall plugging in the heat spreading rate of PCB stock (approximately anyway; it's mostly due to the copper anyway, so take the total thickness across however many layers you've poured/planed) and getting about an inch or two radius -- which is just as we expect for the hot spot on a 2oz 2-layer PCB, and for say a D(2)PAK or so, the total power dissipation (for reasonable max temp rise) is around... 5W or so, I think it was?  So, even as poor an estimate as proportional convection loss is (it's actually steeper than proportional, and depends on orientation, and bits above the hot-part have a lower coefficient than below because, well, the rising air is already warmed!...etc...), it's not too bad overall.

Other examples I've applied diff. eq. to, include uniform heat dissipation along a heat spreader sunk at one end (temp goes quadratically with distance, vertex at the uncooled end -- makes sense), or the hold-up time of an SMPS (also a quadratic).  Quadratics and exponentials are nice as you need no tricks to solve them (or just one simple trick for exponentials), just integrate and go.

Or the uh... what are some other recent workings-out, *thumbs through notes*, oh yeah:
- A couple, just, simple proofs reminding of certain integral solutions (half charge/energy point of an exponential decay; RMS of a wave)
- Simplifying complex arithmetic (for JS implementation)
- Transfer functions for certain RLC networks
- Or uh, going back a couple years, I derived a "tuning" equation for the series-resonant induction heater circuit.  Despite being 3 elements in a 2nd-order system, this involves a 4th order polynomial solution (basically because the solution isn't precisely symmetrical for ±ω i.e. can be reduced in terms of ω^2 --> ω, which is to say, of the form a ω^4 + b ω^2 + c = 0), but the solution is very close to the nearest quadratic* so a numerical solution converges rapidly.

*Hm, I wonder in what sense polynomials can be projected into each other; in the sense of projective spaces, mapping a higher-dimension space to a lower one.  I suppose you have your choice of projections though, both in the linear algebraic sense (take whatever [linear] map you like), and any kind of polynomial (or even more complicated) function you might apply.  (Compare 3D perspective projection, where P:[x, y, z] --> [x/z, y/z], a rational relation.)

Well, I digress...

Tim
« Last Edit: February 27, 2023, 06:23:05 pm by T3sl4co1l »
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: pcprogrammer

Offline pcprogrammerTopic starter

  • Super Contributor
  • ***
  • Posts: 4670
  • Country: nl
Re: How correct is this video about the future of silicon
« Reply #11 on: February 27, 2023, 07:23:08 pm »
Well now that's an easier one, at least!  Well, depending on what aspect you're looking for, heh.

Might be for you, but for me most of it is in the dark.  :o

I have played with simple IIR filters and can do calculations on them, but they don't always give me what I want. I'm talking audio synthesizer stuff here. With some trickery it is even possible to make them oscillate, just like the filters used in actual analog synths. The calculations can be similar to those for simple analog RC filters, from what I understand of it.

It is, I think, the FIR filters -- where you have a set of coefficients for the multiply-add on the different delayed samples -- that puzzle me. But based on what you wrote the biquad is IIR, so now I'm confused.

Have to reread the book on DSP I have. That is the problem for me with a lot of this stuff: it does not stick at the first read anymore. Here too, putting it into practice and repeating it over and over is key.

I do grasp the principles a bit, but the calculation of the coefficients for a specific -3 dB point eludes me. I will take a look at the website and the code you linked. JS is at least something I know well.

This filter stuff was what I was playing with before I started to reverse engineer the FNIRSI 1013D scope. I bought that scope to have a smaller device on my desk to look at the signals coming from my Siel Opera 6 emulation module I made. Yet another project that is lying around waiting for the software to be finished  :palm:

It is based on six STM32F103 modules and one STM32F303 module. One of the F103s is the master module that controls the other six modules. Each of these is to generate a single voice based on the architecture of the Siel Opera 6. The F303 module is used for its DAC; it outputs the mix of the 6 voices. It still needs a lot of work.

Cheers,
Peter



Offline T3sl4co1l

  • Super Contributor
  • ***
  • Posts: 22436
  • Country: us
  • Expert, Analog Electronics, PCB Layout, EMC
    • Seven Transistor Labs
Re: How correct is this video about the future of silicon
« Reply #12 on: February 27, 2023, 10:26:44 pm »
I have played with simple IIR filters and can do calculations on them but they not always bring what I want. I'm talking audio synthesizer stuff here. With some trickery it is even possible to have them oscillate just like filters used in actual analog synths. Calculations can be similar to simple analog RC filters for what I understand from it.

Ah, neat.  And yep, of course you'll get more accurate tones with a DDS (and total freedom of waveform at that!), but there might be reasons to still want a coarsely tunable feedback sort of thing.


Quote
It is the I think FIR filters where you have a set of coefficients for the multiply add on the different delayed samples that puzzle me. But based on what you wrote the biquad is IIR, so now I'm confused.

Yes, exactly.  Biquad is a feedback (IIR) type.  Or at least, the kind I'm thinking of is.  Here's the implementation I used last time I played around,
https://github.com/T3sl4co1l/Reverb/blob/master/dsp.c#L292
well... not the clearest thing, it's got some optimization cluttering things up there.

In general, a DSP filter is a weighted sum (a convolution) over the current and previous input and output samples:
$$ y[n] = x[n] a_0 + x[n-1] a_1 + ... + y[n-1] b_1 + y[n-2] b_2 + ... $$

\$x[n]\$ is the latest input sample, \$y[n]\$ is the new output sample, and \$[n-1]\$ etc. are past samples of input or output.  The coefficients a and b have finite span of course (above some n, their values are zero).

FIR simply has all \$b_n = 0\$.  IIR has any combination, but usually restricted to \$a_0\$, \$a_1\$, \$a_2\$, \$b_1\$ and \$b_2\$ as in the biquad case.

Also I might be using the symbols backwards here, but anyway it's just multiply-accumulate, a convolution over previous input samples, and optionally outputs.
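That general form, as a naive sketch -- with the feedback terms ADDED exactly as written in the equation above (i.e. the b's here are the already-negated coefficients, per the note below about the MAC arrangement):

```javascript
// Naive sketch of the general difference equation above.  a = feedforward
// coefficients (applied to inputs), b = feedback coefficients (applied to
// past outputs), added with a plus sign exactly as in the equation.
function makeFilter(a, b) {
  const xs = new Array(a.length).fill(0); // xs[k] = x[n-k]
  const ys = new Array(b.length).fill(0); // ys[k] = y[n-1-k]
  return function (x) {
    xs.unshift(x); xs.pop();              // shift in the new input
    let y = 0;
    for (let k = 0; k < a.length; k++) y += a[k] * xs[k];
    for (let k = 0; k < b.length; k++) y += b[k] * ys[k];
    ys.unshift(y); ys.pop();              // shift in the new output
    return y;
  };
}
```

FIR is then just `b = []`, and the biquad is `a` of length 3 with `b` of length 2.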

I forget if the biquad precisely uses coefficients in this exact form, but in any case the code above works on a near-trivial transformation of the values from the calculator linked earlier.  Let me see here; I wrote a bookmarklet (remember those?) to do that.  You load up the calculator, set up the values as needed, then run this:

Code: [Select]
(function() {
    // Scrape the coefficient list from the biquad calculator page and
    // print it in Q1.14 fixed point (scaled by 16384), decimal and hex.
    var count = 0, ret = ["", ""], n;
    document.getElementById('biquad_coefsList').innerHTML.split(" ").forEach(function(x) {
        n = Math.round(16384 * Number.parseFloat(x)); // scale to Q1.14
        if (n) { // skip empty / zero entries
            if (count >= 3) {
                n = -n; // negate the feedback (b) coefficients for the MAC loop
            }
            ret[0] += ", " + n;
            // 16-bit two's-complement hex, zero-padded
            ret[1] += " " + ("0000" + (n + 0x10000).toString(16)).slice(-4);
            count++;
        }
    } );
    ret = "a0, a1, a2, b1, b2:\n" + "Dec: " + ret[0].slice(2) + '\n' + "Hex: " + ret[1].slice(1);
    console.log(ret);
    alert(ret);
} )();

Oh yeah, the later coefficients are negated, just to keep the flow straightforward -- the data are arranged in memory for one fell swoop of MAC-ing, which leads to some shuffling around of them.  Whether that's actually optimal on AVR or other, whatever; it works well enough though.

...Which, is indeed as commented.  That T3sl4co1l guy seems like he was on top of things after all!...

The 16384 constant of course gives 14 bits fractional, 1 bit integer, and 1 bit sign.  This limits some values I think?  But for most values it seems alright.


Quote
Have to reread the book on DSP I have. That is the problem for me with a lot of this stuff, it does not stick at the first read anymore. Here too, taking it into practice and repeat it over and over is key.

I do grasp the principles a bit, but the calculations for the coefficients for a specific -3db point elude me. I will take a look at the website and the code you linked. JS is at least something I know well.

When the change per sample is small, you can approximate it just as you'd solve an RC circuit: take the differential-equation form and change the \$d\$'s to \$\Delta\$'s.  I think that was how I derived the DC offset filter (dspHighpass()).  You need reasonable precision of course, otherwise it just locks up in place (hence the 32-bit accumulator there).
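A sketch of that d-to-Δ approach -- this is a guess at the general shape of such a DC blocker, not the actual dspHighpass() from the linked repo:

```javascript
// One-pole DC blocker derived the "RC" way: a leaky integrator tracks the
// DC level (the discrete analog of an RC lowpass), and subtracting it
// leaves the highpass.  k plays the role of dt/RC; small k = low cutoff.
// Sketch only -- not the actual dspHighpass() implementation.
function makeDcBlock(k) {
  let avg = 0;  // slow running estimate of the DC level
  return function (x) {
    avg += k * (x - avg);  // lowpass:  delta(avg) = k * (x - avg)
    return x - avg;        // highpass: input minus its lowpassed self
  };
}
```

In fixed point, k becomes a shift, and `avg` needs extra accumulator bits so the `k * (x - avg)` step doesn't round to zero and lock up -- the precision issue mentioned above.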

As cutoff gets closer to Fs/2, everything warps, and you start needing tan() terms (the bilinear transform's frequency prewarping); I already forget much of the formulas myself.  Which, if you've never done it, deriving those a few times might be a good exercise I suppose?  I should maybe do that again myself, but at least for the time being, I'm happy enough having a general sense of it -- basic order-of-magnitude confidence checking and that -- and just running a calculator to get the exact values when needed.

As for the nature of those formulas, the derivations -- like I said, at the heart is a transform, and you're basically solving polynomials in that transformed space as an auxiliary equation to the difference equation, and the poles/zeroes drop out along the way, which happen to tell you something about the frequency response.  Worst comes to worst, you can do a zero-order hold and Fourier-transform the signals -- but working in the Z domain straightaway is easier, of course.

And yeah, DSP filters kinda suck, in general.  Especially as you approach Fs/2.  That's more or less your price of doing everything step by step -- having to do a lot of steps to accomplish the same thing.  The asymptotic response is still the order of the filter (or less), but the sharpness is different, and you tend to need more stages for a given response.  So maybe you just needed to lean into things a bit deeper.

It's all kinda trivial when you can toss more CPU power at it -- maybe it's less fun this way, but you could always consider tossing it on an rPi or whatever and run everything, well you could run parallel processes even if you wanted, but just chucking everything into a sample loop one after the next and chaining them all together will easily be done on a platform like that.  Networking STM32s sounds perhaps a little nightmarish in contrast...(says the guy who made a DSP effect on a poor unsuspecting XMEGA, though :-DD ).

Also, you should at least be able to do a couple voices, or a lot of filtering and whatnot effects, on the '303, no? *shrug*

Anyway, I'm sure there are reasons (even if it's just "having fun" ... or having "fun" as the case sometimes may be). :)

Tim
Seven Transistor Labs, LLC
Electronic design, from concept to prototype.
Bringing a project to life?  Send me a message!
 
The following users thanked this post: pcprogrammer

Offline pcprogrammerTopic starter

  • Super Contributor
  • ***
  • Posts: 4670
  • Country: nl
Re: How correct is this video about the future of silicon
« Reply #13 on: February 28, 2023, 07:12:55 am »
Yes, exactly.  Biquad is a feedback (IIR) type.  Or at least... the kind that's what I'm thinking of. 

After looking at the website with the calculator and following a link on it to info about the biquad, I came to that realization. The diagrams show feedback, which is the cause of the "infinite" behavior. That much had stuck from reading the DSP book  :)

As cutoff gets closer to Fs/2, everything warps, and you start needing more...

This I experienced in several tests I did with the simple IIR filters. Best to stay well below it.

Networking STM32s sounds perhaps a little nightmarish in contrast...

Also, you should at least be able to do a couple voices, or a lot of filtering and whatnot effects, on the '303, no? *shrug*

Anyway, I'm sure there are reasons (even if it's just "having fun" ... or having "fun" as the case sometimes may be). :)

That was indeed part of the challenge: to devise some system to have a lot of data passing through the modules. I'm using SPI for the signals and UART for the parameters. This, in combination with DMA, takes no processor time and works well. The master module generates common LFO signals and sends these to every module via one of the SPI links. The voice modules inject their voice output into the buffer that is repeatedly sent to the F303 module.

And sure multiple voices could probably be done on the F303. An AVR can do multiple voices of simple synthesis, but the "fun" is in the whole setup being a replica of the original.

I wrote an emulator for it in 2014 under Windows XP with functioning synthesis, and it would be easy to make something on a Raspberry Pi, but I wanted to do something with the bunch of Blue Pills I bought.  :-DD

Peter

