Author Topic: Wouldn't wave function collapse allow for instant information transfer?  (Read 2309 times)

0 Members and 1 Guest are viewing this topic.

Offline ELS122Topic starter

  • Frequent Contributor
  • **
  • Posts: 935
  • Country: 00
Wouldn't the wave function collapse allow for instant information transfer over any distance? Or am I misunderstanding how it works?
And, for example, allow us to see whether the wave function of some particle has already collapsed, hinting that it was observed by some alien?
« Last Edit: May 16, 2024, 05:52:36 am by ELS122 »
 

Offline BU508A

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: de
  • Per aspera ad astra
Wouldn't the wave function collapse allow for instant information transfer over any distance?

No.
“Chaos is found in greatest abundance wherever order is being sought. It always defeats order, because it is better organized.”            - Terry Pratchett -
 

Offline AVGresponding

  • Super Contributor
  • ***
  • Posts: 4725
  • Country: england
  • Exploring Rabbit Holes Since The 1970s
Wave function collapse does not generate "new" information; it merely reveals existing but previously hidden information.
nuqDaq yuch Dapol?
Addiction count: Agilent-AVO-BlackStar-Brymen-Chauvin Arnoux-Fluke-GenRad-Hameg-HP-Keithley-IsoTech-Mastech-Megger-Metrix-Micronta-Racal-RFL-Siglent-Solartron-Tektronix-Thurlby-Time Electronics-TTi-UniT
 

Offline switchabl

  • Frequent Contributor
  • **
  • Posts: 445
  • Country: de
No, and this is known as the No-Communication Theorem (NCT) in quantum information theory.

Let's say Alice prepares pairs of entangled photons, keeping one and sending the other out into space. When Alien-Bob performs measurements on those, this changes the overall quantum state. But the NCT shows that it doesn't change the statistics of any measurement Alice can do on her photons. No experiment she can do would allow her to know if Alien-Bob ever detected the other half.

It is only when they finally get to talk over a normal communication channel and compare notes that they will find their measurements are correlated.
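This no-signalling property can be checked with a toy Monte-Carlo of the standard singlet-state statistics (a sketch only; the angles and trial counts are arbitrary illustrative choices):

```python
import math
import random

def alice_plus_fraction(n, bob_angle=None, alice_angle=0.0, seed=1):
    """Fraction of +1 outcomes on Alice's side, with or without Bob measuring."""
    rng = random.Random(seed)
    plus = 0
    for _ in range(n):
        # Standard QM prediction: Alice's own outcome is a fair coin flip,
        # no matter what happens at Bob's end.
        a = 1 if rng.random() < 0.5 else -1
        if bob_angle is not None:
            # Bob's outcome is (anti)correlated with Alice's, but this only
            # shows up in the JOINT statistics, never in Alice's marginal.
            delta = alice_angle - bob_angle
            b = -a if rng.random() < math.cos(delta / 2) ** 2 else a
    # b above matters only when the two sides later compare notes.
        if a == 1:
            plus += 1
    return plus / n

# Whether Bob measures or not, and whatever angle he picks,
# Alice sees the same ~50/50 statistics:
print(alice_plus_fraction(100_000))                  # Bob never measures
print(alice_plus_fraction(100_000, bob_angle=0.7))   # Bob measures at 0.7 rad
print(alice_plus_fraction(100_000, bob_angle=2.1))   # Bob measures at 2.1 rad
```

All three printed fractions come out near 0.5, which is the content of the No-Communication Theorem in miniature: nothing Bob does moves Alice's local statistics.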
 
The following users thanked this post: HuronKing, ELS122

Offline HuronKing

  • Regular Contributor
  • *
  • Posts: 237
  • Country: us
Wave function collapse does not generate "new" information; it merely reveals existing but previously hidden information.

It's doubtful there is 'hidden' information, because of Bell's Theorem and the non-locality required to conform to quantum mechanics - which I think is what the OP is getting at. Not the creation of 'new' information, but the encoding of information when entangled particles are created and then separated.

switchabl's explanation is correct in referencing the No-Communication Theorem.

In my goofier imaginations, I've wondered if a resolution to the apparent paradox is that light speed really is anisotropic and the entangled particles DO change instantaneously... in one direction, but it all averages out to speed c in the wash whenever anyone tries to confirm the transmission, thus still obeying relativity, which is built around the two-way speed of light.  ::)
 
The following users thanked this post: T3sl4co1l

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14705
  • Country: fr
Wouldn't the wave function collapse allow for instant information transfer over any distance? Or am I misunderstanding how it works?

You are, but don't worry - nobody understands.
 

Offline AVGresponding

  • Super Contributor
  • ***
  • Posts: 4725
  • Country: england
  • Exploring Rabbit Holes Since The 1970s
Wave function collapse does not generate "new" information; it merely reveals existing but previously hidden information.

It's doubtful there is 'hidden' information, because of Bell's Theorem and the non-locality required to conform to quantum mechanics - which I think is what the OP is getting at. Not the creation of 'new' information, but the encoding of information when entangled particles are created and then separated.

switchabl's explanation is correct in referencing the No-Communication Theorem.

In my goofier imaginations, I've wondered if a resolution to the apparent paradox is that light speed really is anisotropic and the entangled particles DO change instantaneously... in one direction, but it all averages out to speed c in the wash whenever anyone tries to confirm the transmission, thus still obeying relativity, which is built around the two-way speed of light.  ::)

I was trying to make it as simple and understandable as possible.

The entangled particles do not change at all. The "encoding of information" happens at the point of entanglement, not at the point of wave function collapse. Any imposition upon an entangled particle after separation breaks the entanglement.
nuqDaq yuch Dapol?
Addiction count: Agilent-AVO-BlackStar-Brymen-Chauvin Arnoux-Fluke-GenRad-Hameg-HP-Keithley-IsoTech-Mastech-Megger-Metrix-Micronta-Racal-RFL-Siglent-Solartron-Tektronix-Thurlby-Time Electronics-TTi-UniT
 

Offline woofy

  • Frequent Contributor
  • **
  • Posts: 346
  • Country: gb
    • Woofys Place
Wave function collapse does not generate "new" information; it merely reveals existing but previously hidden information.

It's doubtful there is 'hidden' information, because of Bell's Theorem and the non-locality required to conform to quantum mechanics - which I think is what the OP is getting at. Not the creation of 'new' information, but the encoding of information when entangled particles are created and then separated.

switchabl's explanation is correct in referencing the No-Communication Theorem.

In my goofier imaginations, I've wondered if a resolution to the apparent paradox is that light speed really is anisotropic and the entangled particles DO change instantaneously... in one direction, but it all averages out to speed c in the wash whenever anyone tries to confirm the transmission, thus still obeying relativity, which is built around the two-way speed of light.  ::)

I was trying to make it as simple and understandable as possible.

The entangled particles do not change at all. The "encoding of information" happens at the point of entanglement, not at the point of wave function collapse. Any imposition upon an entangled particle after separation breaks the entanglement.

I agree, but most scientists would not, as it seems to imply hidden variables.
A pair of gloves, split between two boxes and sent on their way. Open one box and you instantly know the state of the other one. No mystery, no spooky action. It's an analogy, but then I've never seen a convincing explanation of Bell's inequality that wasn't also full of analogies.

Offline HuronKing

  • Regular Contributor
  • *
  • Posts: 237
  • Country: us
Wave function collapse does not generate "new" information; it merely reveals existing but previously hidden information.

It's doubtful there is 'hidden' information, because of Bell's Theorem and the non-locality required to conform to quantum mechanics - which I think is what the OP is getting at. Not the creation of 'new' information, but the encoding of information when entangled particles are created and then separated.

switchabl's explanation is correct in referencing the No-Communication Theorem.

In my goofier imaginations, I've wondered if a resolution to the apparent paradox is that light speed really is anisotropic and the entangled particles DO change instantaneously... in one direction, but it all averages out to speed c in the wash whenever anyone tries to confirm the transmission, thus still obeying relativity, which is built around the two-way speed of light.  ::)

I was trying to make it as simple and understandable as possible.

The entangled particles do not change at all. The "encoding of information" happens at the point of entanglement, not at the point of wave function collapse. Any imposition upon an entangled particle after separation breaks the entanglement.

I agree, but most scientists would not, as it seems to imply hidden variables.
A pair of gloves, split between two boxes and sent on their way. Open one box and you instantly know the state of the other one. No mystery, no spooky action. It's an analogy, but then I've never seen a convincing explanation of Bell's inequality that wasn't also full of analogies.

This is probably the best explanation on YT of the apparent paradox; it doesn't use many analogies, but breaks down the math of the experiments themselves. Unfortunately, photons are more complex than gloves  :D
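For those who prefer numbers to analogies, here is a minimal numeric sketch of the CHSH form of Bell's inequality. The angles are the standard textbook choices; the "glove" model below is one representative local-hidden-variable strategy (each pair carries a predetermined answer), not the only one:

```python
import math
import random

# Standard CHSH measurement angles (radians)
ANGLES = dict(a=0.0, a2=math.pi / 2, b=math.pi / 4, b2=3 * math.pi / 4)

def chsh(E):
    """CHSH combination S for a correlation function E(angle_A, angle_B)."""
    A = ANGLES
    return (E(A['a'], A['b']) - E(A['a'], A['b2'])
            + E(A['a2'], A['b']) + E(A['a2'], A['b2']))

def quantum_E(a, b):
    # Singlet-state prediction for the spin correlation.
    return -math.cos(a - b)

def glove_E(a, b, n=200_000, seed=42):
    # "Glove" model: each pair carries a hidden direction lam, fixed at the
    # source; each side just reads off a predetermined +/-1 answer.
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        lam = rng.uniform(0, 2 * math.pi)
        A = 1 if math.cos(a - lam) > 0 else -1
        B = -1 if math.cos(b - lam) > 0 else 1   # anticorrelated partner
        total += A * B
    return total / n

print(abs(chsh(quantum_E)))  # 2*sqrt(2) ~ 2.83: violates the classical bound
print(abs(chsh(glove_E)))    # ~2.0: local hidden variables cannot exceed 2
```

The point of Bell's theorem is that *no* glove-style model, however clever, can push |S| above 2, while the quantum prediction of 2√2 has been confirmed experimentally.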


 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6469
  • Country: fi
    • My home page and email address
Unfortunately, photons are more complex than gloves  :D
:D

That is so funny, but apt at the same time.  We fundamentally understand how gloves work: we don't even need to think about it.  But the way photons work really throws a normal, sane human mind for a loop.  There just isn't an intuitive analog that captures even half of it, because they are that different from stuff happening at the human scale.  The same applies to all quantum-scale phenomena.  And while "complex" is definitely the wrong word for the quip to be exactly right, it is an excellent example of how we don't even have intuitive terms to describe this yet!  The only word that comes to my mind here is "weird", but it's even less descriptive!

This is also why I like analogous approximations like "wave-particle duality".  No, it is not real: the physical world doesn't have that duality; it is just an approximate description of the actual behaviour, which can best be described in math.  But, as an approximate description, it allows one of the weirdest? most complex? hardest-to-grasp? features of the quantum scale to be understandable to a "common sense" mind, by combining a pair of analogs.  To me, these are the equivalents of parables of human behaviour and life, except for the scientific realm: not to be taken as the absolute truth or an exact description, but as a tool to gain intuitive understanding, in a form you can examine and build more understanding on top of, without having to treat it as a religious fact that is not to be questioned.

(It is also why I don't really like participating in physics-related discussions, even though I still have all the relevant textbooks, from Brehm–Mullin onwards, that I could check for details.  Unless one is a researcher on the matter, I believe an intuitive understanding of when various models and analogs are applicable is more important than managing the math or remembering the model details.  To some physicists this is semi-offensive, because they've spent literal years learning and understanding the actual models and the physics behind them.  Analogs like wave-particle duality especially are ... well, demeaning/overly simple/not correct enough; and sometimes considered a tool only for those who cannot understand the real, weird, actual physics they are based on.  I lean more towards synthesis myself, and feel that opening up those analogs for engineers and others to use is like applying game theory to old parables: it tends to result in surprising advances in the sort of intuitive understanding that lets people create new, better, more interesting stuff.  (Game theory + old parables = an actual model of enlightened self-interest, for example, which tickles me pink.)  But, for all my verbosity, my previous attempts at this haven't fared well, the underlying idea not carrying over or being received well enough; so I shall desist.)
« Last Edit: May 18, 2024, 03:58:31 am by Nominal Animal »
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6414
  • Country: ro
Unfortunately, photons are more complex than gloves  :D

Gloves certainly have some photons, too, besides other stuff, which makes gloves way more complex than a photon.  :P

Offline m k

  • Super Contributor
  • ***
  • Posts: 2165
  • Country: fi
The wave function is not real.

EM waves are mathematical models of the statistical appearance of photon interactions.
And not real.

A photon is also less than real, but clearly more than nothing.
So, a quantum object.
Advance-Aneng-Appa-AVO-Beckman-Danbridge-Data Tech-Fluke-General Radio-H. W. Sullivan-Heathkit-HP-Kaise-Kyoritsu-Leeds & Northrup-Mastech-REO-Simpson-Sinclair-Tektronix-Tokyo Rikosha-Topward-Triplett-YFE
(plus lesser brands from the work shop of the world)
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6414
  • Country: ro
If the photon is not real, then what is real?  Define "real".

Photons have a physical manifestation; therefore they are as real as anything else in this Universe.  Photons make a lot of sense when thought of as small packets of waves: undulations in an omnipresent electromagnetic field.

Offline m k

  • Super Contributor
  • ***
  • Posts: 2165
  • Country: fi
The measurement is real.
The effect of energy is real.

"A lot of sense" is not very accurate, nor is the collapse of a wave.

We know that a photon happens, but we cannot know anything about the photon.
Reread what NA wrote.

Before a photon interacts, it can't be that point-like object.
When it interacts, it can't be that wave-like object.
How does it interact with a slit if it is a quantum?
Advance-Aneng-Appa-AVO-Beckman-Danbridge-Data Tech-Fluke-General Radio-H. W. Sullivan-Heathkit-HP-Kaise-Kyoritsu-Leeds & Northrup-Mastech-REO-Simpson-Sinclair-Tektronix-Tokyo Rikosha-Topward-Triplett-YFE
(plus lesser brands from the work shop of the world)
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6414
  • Country: ro
You didn't say what "real" means.  Instead, you gave two examples:
1. a measurement is real
2. effect of energy is real

To me, "real" means that it has a standalone and physical existence, something that is independent of my will, and independent of my thinking.

By my definition, a measurement is not real.  A measurement is a comparison made against a reference unit.  That is not real.

A measurement is a human thing, done at my will, and the reference unit is also a matter of human agreement.  Nature does not measure.  The second example of yours fits my definition: the effect of energy will be the same, independent of my will.

Can't comment any further unless we establish a common set of terms and definitions.

Not saying you are wrong, only saying that one of the examples you gave does not match my definition.  It's OK for words to have different meanings for different people, but we need to identify those differences, or else we won't be able to communicate with each other.

So what is your definition for "real"?
« Last Edit: May 18, 2024, 01:04:50 pm by RoGeorge »
 

Offline m k

  • Super Contributor
  • ***
  • Posts: 2165
  • Country: fi
I meant a fundamental measurement, a change of state of the measuring apparatus.
An interaction of a photon and an electron is a change of state.

A change from "nothing has happened" to "something has happened" is real.
Fermions are real; they can be measured.

A photon is not unreal; its overall reality is less than 1 but more than 0.
It's a quantum object.

How does a wave from a distant galaxy collapse into a scientist's eye?
The name is incomplete; the complete name is a probability wave.
Advance-Aneng-Appa-AVO-Beckman-Danbridge-Data Tech-Fluke-General Radio-H. W. Sullivan-Heathkit-HP-Kaise-Kyoritsu-Leeds & Northrup-Mastech-REO-Simpson-Sinclair-Tektronix-Tokyo Rikosha-Topward-Triplett-YFE
(plus lesser brands from the work shop of the world)
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6414
  • Country: ro
What do you mean by fundamental measurement?  To measure means to compare with a reference.  What is the difference between a measurement and a fundamental measurement?  From the rest of the text, I suspect that by "fundamental measurement" you mean counting events, right?  But I'm not sure.

Anyway, I cannot follow the line of thought in the rest of the text.  I am not sure what you mean by the photon being not unreal, or less than 1 but more than 0, or whether the line about wave function collapse is a question or a statement.

You seem to be taking the Copenhagen interpretation as established fact, but I'm not convinced that that interpretation is correct.  I'm tempted to reject it, but I don't have definitive proof that the Copenhagen interpretation is wrong.



To return to the topic question: no, information cannot be transferred instantly.

There seems to be a direct relation between information and energy, similar to how there is a formula relating mass and energy.  This suggests information can be transformed into energy, and energy into information, just like mass can be transformed into energy, or energy into mass.

So, if we accept the transitivity of mass ~= energy ~= information, and the fact that mass or energy cannot be transferred faster than light (FTL), let alone instantaneously, then information cannot go FTL either.

If it were otherwise, then one would be able to transfer energy FTL by converting that energy into information first, transmitting the information instantly, and converting it back to energy at the receiving point.  And if you put the receiver and the transmitter at the same point, you would instantly double the energy out of nothing.  Which contradicts physics, because it is not possible to build over-unity devices.

I'm not sure whether reductio ad absurdum based on physics is as valid a demonstration as one based on math, but if it is accepted, then no matter which method one tried to transfer info FTL, entanglement or anything else, it would still remain an impossibility.
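The information-energy relation alluded to above is presumably Landauer's principle: erasing one bit of information dissipates at least kT·ln 2 of energy. (That this is the relation the poster means is an assumption; the connection to the FTL argument is the poster's own reasoning.) The number at room temperature is easy to compute:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact, SI 2019)

def landauer_limit(temperature_k):
    """Minimum energy (J) dissipated to erase one bit at the given temperature."""
    return K_B * temperature_k * math.log(2)

e_bit = landauer_limit(300.0)
print(f"{e_bit:.3e} J per erased bit at 300 K")  # ~2.87e-21 J
```

Tiny, but strictly nonzero: information handling is never energetically free.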



But then, if it is impossible to create something out of nothing, where did all this "stuff" around us come from?  ???
Why is there something, rather than nothing?



Let's notice the words "where from" (which assume only a transfer is possible, not creation out of nothing), so it might be a malformed question.  And in the second question the word "why", which assumes causality.

There are so many tacit assumptions embedded in the way we think.  Will we ever be able to disembed ourselves from them?

But then, how does the mind work, what does it mean to understand, what does it mean to be aware, and how has only a very narrow slice of reality (only the thin slice revealed by our sensory inputs) made us act, and think, and be what we are?  I guess it would be an impossibility to "disembed" ourselves from all our past experiences, and from all the tacit assumptions, such that we would be able to think in an absolute manner, totally abstract.  I have a pet theory about that, too, but enough rambling for one post.  ;D
« Last Edit: May 18, 2024, 05:23:52 pm by RoGeorge »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6469
  • Country: fi
    • My home page and email address
There are so many tacit assumptions embedded in the way we think.  Will we ever be able to disembed ourselves from them?
It depends on who you refer to by "we".

Analytical people who naturally gravitate towards a scientific approach?  Or the people who don't have the interest to consider such things, and instead treat our technology as if it were magick?  Who reject the scientific method because it was mostly discovered by males, and is therefore suspect and oppressive?  Or because their religion has all the truth they ever need?

Not all humans are suited to scientific work and engineering.  It is part of the success of the human race: we're so varied that, given a sufficient population base, we can find enough individuals to do any task.  However, the push towards unity, equity, and uniformity seems to completely ignore that, and forces everyone into the same mold.  I am pretty convinced that, as a species, we're driving ourselves towards a behavioural sink: going the Universe 25 way.
 

Offline m k

  • Super Contributor
  • ***
  • Posts: 2165
  • Country: fi
A fundamental measurement is a binary thing: nothing means no measurement, something means a measurement; quality is irrelevant.
It happens when a neutrino and the first part of a neutrino detector interact.
The rest of a measurement is forming it into something the apparatus controller can accept.

With a laser, one can think that the probability of a beam is pretty high; I can accept that, actually quite easily.
But the probability wave of every photon is still spherically symmetrical.
The combination of time and phase constructs the beam.

The probability part of the wave can't carry information.
Advance-Aneng-Appa-AVO-Beckman-Danbridge-Data Tech-Fluke-General Radio-H. W. Sullivan-Heathkit-HP-Kaise-Kyoritsu-Leeds & Northrup-Mastech-REO-Simpson-Sinclair-Tektronix-Tokyo Rikosha-Topward-Triplett-YFE
(plus lesser brands from the work shop of the world)
 

Offline EPAIII

  • Super Contributor
  • ***
  • Posts: 1100
  • Country: us
I prefer to look at it in a simple way. Two particles fly off in different directions. Together they have an average characteristic A-B, but we don't know which is A and which is B until the wave function collapses.

So someone observes one of the particles and finds it to have one of these two characteristics. The wave function then collapses, and one has characteristic A and the other has characteristic B. However, the first observer has no way of CONTROLLING whether the particle he observes (causing the wave function to collapse by that act of observation) will have characteristic A or characteristic B. That is fully random, and only known AFTER the wave function collapse has occurred. It cannot be decided upon BEFORE that collapse (like a telegraph operator deliberately sending a dot or a dash), so it cannot be used to control that collapse or transmit any information to an observer who subsequently observes the other, far distant particle. Our cosmic observer is only "transmitting" TRULY RANDOM dots and dashes. No information there.

It's kind of like the particles decide which will be which, not the two observers. And they only do so as the collapse occurs, with no control by the observer.
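The "truly random dots and dashes" point can be put in information-theoretic terms: a stream of collapse outcomes is indistinguishable from fair coin flips, so its entropy is maximal and it carries no controllable message. A toy sketch (the symbol names and sample count are arbitrary):

```python
import math
import random
from collections import Counter

def received_symbols(n, seed=7):
    """Bob's 'dots and dashes': each collapse outcome is an uncontrollable coin flip."""
    rng = random.Random(seed)
    return ['.' if rng.random() < 0.5 else '-' for _ in range(n)]

def entropy_bits(symbols):
    """Shannon entropy per symbol, in bits."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

msg = received_symbols(100_000)
print(f"entropy per symbol: {entropy_bits(msg):.4f} bits")  # ~1.0: pure noise
```

An entropy of ~1 bit per binary symbol is the signature of pure noise: the sender had no freedom to bias it, so no message can ride on it.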

Edited 5/20/24 to correct typos ("of" changed to "or" and "dasher" changed to "dashes").
« Last Edit: May 20, 2024, 07:42:18 am by EPAIII »
Paul A.  -   SE Texas
And if you look REAL close at an analog signal,
You will find that it has discrete steps.
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6414
  • Country: ro
There are so many tacit assumptions embedded in the way we think.  Will we ever be able to disembed ourselves from them?
It depends on who you refer to by "we".

Analytical people who naturally gravitate towards a scientific approach?  Or the people who don't have the interest to consider such things, and instead treat our technology as if it were magick?  Who reject the scientific method because it was mostly discovered by males, and is therefore suspect and oppressive?  Or because their religion has all the truth they ever need?

Not all humans are suited to scientific work and engineering.  It is part of the success of the human race: we're so varied that, given a sufficient population base, we can find enough individuals to do any task.  However, the push towards unity, equity, and uniformity seems to completely ignore that, and forces everyone into the same mold.  I am pretty convinced that, as a species, we're driving ourselves towards a behavioural sink: going the Universe 25 way.

To whoever complains that most science was made by men, tell them that all people were made by women.  Women made every person, ever.  How about that?  :)


Back to the question about abstract thinking: by "we" I meant a human mind, in general.  Now that you've made me think about how far that "we" can be extended across various types of minds, I guess no mind can totally detach itself from its former training and experiences such that it would be able to think in a genuinely abstract and unbiased way.  Not even an artificial mind would be able to do that.

The most abstract thing I know is mathematics.  But even a mathematical proof starts from a set of axioms.  Change one axiom, and suddenly the sum of angles is no longer 180.  Isn't that just like human bias?  In the sense that the outcome may be different, because we started from a different set of axioms?

Something similar happens with human thinking.  Just as in math there are axioms, humans have knowledge "tokens" (I heard the term "token" in some GPT 101 video, and I'm borrowing it because the human mind works about the same way, IMO).  In contrast with math axioms, the tokens of the human mind are very numerous.  They start to form by themselves, by classifying the data streams coming from our sensory inputs.  The most repeated patterns become categories; they become the "tokens" in terms of which we think.  This learning is mostly by observing, and later by interacting with the world - the tokens we discover ourselves, before school.  Later these tokens become even more distilled and better classified, with more tokens added from social interaction, etc.

If it is easy to see how a single difference in one axiom can create very different results, then a human mind, which uses tokens instead of axioms, can produce all kinds of results in different individuals.  There is one more difference: while in math the noise is zero, the human mind is very noisy, so there is even more unpredictability.

I think the people characterized as more creative are those with noisier brains.  I think new ideas are, more or less, caused by noise added to exact reasoning.  Sometimes context can act just like noise, in the sense that the surrounding world can distract us in a way that perturbs the reasoning.  Exaggerated example here: an apple falls on your head, and bang, you realize who knows what.  Artificial neural networks cannot do that naturally.  GPT adds intentional randomness to the result, or else it would always generate the same answer.


No idea if what I've written makes any sense.  It's not a pet theory I came up with overnight; I keep thinking about it, seeking arguments while trying to avoid cherry-picking, and keep patching whatever contradictions I find in it.  It started somewhere in the late 80's when, for some reason, I wanted to simulate "life".  I mean, a world of interactive sprites (the sets of pixels seen in early 8-bit video games).

The initial idea was to make them interact and observe their behavior.  That was before YouTube, in BASIC, on a ZX Spectrum with an 8-bit Z80 at 3.5 MHz and only 32k.

At that point I didn't know about genetic algorithms or AI.  I never finished it as envisioned, but I did find some interesting results; the most memorable was that life needs a certain level of randomness in order to thrive.  Too little and you get a repetitive, or even worse a frozen, world; too much randomness and it all disintegrates into noise.

Later the IBM PC XT arrived, but I never repeated the experiment in a bigger world.  Then the Internet took flight, and I found out about genetic algorithms and various types of neural networks.  I never studied any of these in a rigorous way, but I have tinkered with them here and there.

Since then, I keep noticing similarities between artificial NNs and how the brain works.


Rambling too much already; to sum it up (in a disorganized way):
- learning is identifying repeating patterns
- we construct our internal tokens according to what we were exposed to most often
- out of all that is out there, we sense only a very narrow slice
- tokens are the building blocks of our inner representation of the outside world
- tokens are somewhat similar to the axioms in math, as in building blocks
- the main difference from axioms is that our inner tokens keep morphing over our entire life; they are fuzzy, not fixed/frozen like an axiom
- tokens can be used to simulate, or to predict, an outcome (talking about the continuous inner prediction, the simulation we do in an automated way: once we accumulate enough tokens and enough experience, the brain does not wait for the sensory datastream; it starts to predict it, to simulate the most probable outcome in advance.  What we think we perceive and see, our awareness of the surroundings, is mostly faked by brain predictions.  That is why one can look and still not see the car: the brain didn't anticipate there might be a car coming from there.)
- to understand something means to have enough tokens that you are able to simulate the outcome in your mind.  If I throw you an object, you can catch it without writing the parabola equation.  You can visualize the trajectory in your mind, which means you understand.  If you have a formula and need to calculate something, that is knowing, not understanding.
- other animals are self-aware, too; they can understand and simulate/predict outcomes in advance, too; the more neurons they have, the more awareness
- human behavior is not well determined; it is influenced a lot by randomness, both internal noise and unexpected external influences
- and the scariest of all: the process of adjusting and distilling our inner tokens never stops.  It happens whether we want it or not.  We cannot "pause" learning.  Which means we are all susceptible to brainwashing and propaganda, even when we know it's propaganda.  If a stimulus is repeated over and over, eventually it will alter your existing tokens, and maybe form new ones, too.  This is how I explain the saying "A lie repeated a thousand times becomes truth."

The funniest aspect of the last one is that one can accidentally self-brainwash, simply by repeated exposure.  ;D

Sometimes this is intentional, as in sticking self-motivational posters around the room, but more often it happens as an unintended side effect of spending too much time in the wrong circle of friends, or on bad social networks, etc.
« Last Edit: May 19, 2024, 02:37:09 pm by RoGeorge »
 
The following users thanked this post: RJSV

Offline vad

  • Frequent Contributor
  • **
  • Posts: 463
  • Country: us
A fundamental measurement is a binary thing: nothing means no measurement, something means a measurement; quality is irrelevant.
It happens when a neutrino and the first part of a neutrino detector interact.
The rest of a measurement is forming it into something the apparatus controller can accept.

With a laser, one can think that the probability of a beam is pretty high; I can accept that, actually quite easily.
But the probability wave of every photon is still spherically symmetrical.
The combination of time and phase constructs the beam.

The probability part of the wave can't carry information.

What if the Simulation Hypothesis is correct and we live in a computer simulation: does that make the world around us, including the photons, the measuring apparatus, and the observer who observes the apparatus, real?
 

Offline m k

  • Super Contributor
  • ***
  • Posts: 2165
  • Country: fi
The outside of the universe is not well defined.

But everyday life can easily be defined as not real, simply because nothing is actually touching anything.

Maybe everything is just ripples on the surface of nothingness.
Advance-Aneng-Appa-AVO-Beckman-Danbridge-Data Tech-Fluke-General Radio-H. W. Sullivan-Heathkit-HP-Kaise-Kyoritsu-Leeds & Northrup-Mastech-REO-Simpson-Sinclair-Tektronix-Tokyo Rikosha-Topward-Triplett-YFE
(plus lesser brands from the work shop of the world)
 

Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 14705
  • Country: fr
What is nothing?
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6469
  • Country: fi
    • My home page and email address
I know I shouldn't try, but.. I guess I'm a glutton for punishment.

The concepts of "wavefunction" and "wavefunction collapse" (or "quantum state collapse") themselves are analogs of the mathematical properties we do know do apply.

For example, in the double-slit experiment, because the interference pattern occurs even when individual quanta are passed through –– easily implemented with photons and electrons, but applies to all quanta –– we know that each individual quanta passes through both slits.  That in fact, it is what we call the wavefunction that passes through the slits.  It is not strictly speaking a probability distribution: although the squared wavefunction amplitude correlates to the statistical distribution of the measurables, due to the interference in the individual quanta double-slit experiment, it does have to pass through both slits at the same time.

If we add any kind of interaction that determines which slit a quantum passed through, the interference vanishes.  This is what we call a wavefunction collapse.

The key thing to realize is that in the default double-slit experiment, the quantum interacts with the double slit as a wavefunction: it really passes through both slits.  When we modify the interaction so that the quantum is localized to one slit as it passes through, the interference vanishes, and the wavefunction "collapses" to a particle (according to the probability distribution given by the wavefunction magnitude squared).

(In general "wavefunction collapse" or "quantum state collapse" refers to an interaction which causes one or more properties to "collapse" to a real measurable value, with probabilities described by the square of the wavefunction amplitude.)
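As a concrete toy illustration of the distinction (my own numerical sketch, not from the post; the wavenumber, slit separation, and screen distance are arbitrary values): summing the two complex amplitudes before squaring gives interference fringes, while squaring each slit's amplitude first, as if the quantum had been localized to one slit, does not.

```python
import numpy as np

# Toy far-field double slit: the amplitude at each screen position is the
# sum of two complex amplitudes, one per slit.  All parameters (wavenumber,
# slit separation, screen distance) are arbitrary illustrative values.
x = np.linspace(-10.0, 10.0, 2001)   # position along the screen
k = 2.0                              # wavenumber
d = 1.5                              # half the slit separation
L = 10.0                             # slit-to-screen distance
r1 = np.sqrt((x - d) ** 2 + L ** 2)  # path length from slit 1
r2 = np.sqrt((x + d) ** 2 + L ** 2)  # path length from slit 2
psi1 = np.exp(1j * k * r1) / np.sqrt(r1)
psi2 = np.exp(1j * k * r2) / np.sqrt(r2)

# Wavefunction through BOTH slits: add amplitudes, then square.
both = np.abs(psi1 + psi2) ** 2
# "Localized to one slit": square each amplitude, then add (no interference).
either = np.abs(psi1) ** 2 + np.abs(psi2) ** 2

print(both.max() / either.max())   # constructive fringes overshoot...
print(both.min() / either.min())   # ...and destructive fringes dip to near zero
```

The coherent sum oscillates between roughly twice the incoherent intensity and almost nothing; the incoherent sum is smooth, which is exactly the difference the double-slit experiment exposes.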

We have no fucking idea whether wave functions are real, whether wavefunction collapse is real, or whether these are just "side effects" of the mathematical representation we currently use for these.  Really.

We do know that the mathematical models work so well that we haven't been able to "break" them yet.  But they don't tell us what or why, only how.



Let's consider the situation where a photon hits some solid matter.

In most cases, the photon hits one of the outer electrons bound to an atom.  Mathematically, the interaction is not between two point-like objects that pick a random location and properties according to their wavefunctions, but between the two wavefunctions directly.  If you look at it statistically, the two seem to be the same thing, but they are not: just like in the double-slit experiment, you get "extra" effects compared to the statistical-particle approach.

When a photon hits a regular lattice or a large molecule with many electrons bound to the lattice or molecule as a whole instead of to particular atoms in it, the photon essentially interacts with all those electrons at once –– even though usually a single electron will change state due to the added energy.  (Again, you could model this statistically by assuming the photon interacts with a random electron, but phenomena like inverse Compton scattering, where the photon gains energy and the electron(s) lose energy, and in general the way the changed electron state reflects the entire set of electrons rather than one particular electron, indicate that this is a simplification; to capture the phenomena correctly, one must look at how the wavefunctions interact, and not just at how "collapsed wavefunctions" could interact.)

When two electrons interact, for example when you have two neutral hydrogen atoms approaching each other, the bound electrons do not interact as point-like particles: to model the interaction correctly, you must integrate the charge density according to the square of the wavefunction amplitudes –– as if you considered all possible electron-electron pairs over an empty universe except for the two, and calculated the overall probability density.  However, the statistical interpretation is not exactly correct, because the end result will still act like a wavefunction in double-slit type experiments.  In other words, such interaction does not cause a "wavefunction collapse".

Now, if you start asking "okay, exactly what does cause a wavefunction collapse", you get into the la-la land, into Wigner's friend the Ultimate Observer and such.  We don't know, and we cannot know until we understand what the fuck the mathematical construct we call a wavefunction corresponds to in reality, and what wavefunction collapse is, if it is not just a mathematical construct.

Thing is, those of us who are only interested in applying these things to create interesting stuff (like quantum dots to LEDs, solar cells, transistors, et cetera), don't need to understand what; only how.
To some, this is extremely easy, and to others, extremely hard.  It is like holding mutually exclusive statements in mind without rejecting any of them, forming an opinion weighted by their momentary probabilities from moment to moment, situation to situation.  Easy for some, impossible for others.
Analogs and simplifications very often suffice to get stuff done, although one must understand they are just that, and not over-extend them; which is why the first thing anyone applying physical models to solve a problem should ask after getting some results is "does this make any sense?"

Indeed, in my opinion, true physics doesn't provide any human-scale/human-understandable/intuitive explanations of what or why at all, only how, and that via mathematical models; math being the most precise language to express such things in.



There is an aphorism, "Perfect is the enemy of good", reflecting that striving for perfection often prevents implementation of smaller improvements; and that since perfectness is rarely achievable, effort is wasted and improvements lost.

My own attitude towards physics is similar.  I like philosophical ponderings at least as much as the next person (especially along the von Neumann-Wigner interpretation), but to me, they are human philosophy and not physics.  When it comes to science and engineering, better modeling of the ways we can interact with the measurable reality is what matters to me; not which authority or Big Name you follow, or whose ideas you like best.  Unfortunately, even physics discussions often devolve into just that, and in my opinion, it is a pure waste of time, when one could discuss how known physics could be utilized if we had enough energy and sufficiently high tech gadgets at hand.

As to the question in the title, the answer is that "according to the best current models of measurable reality, no; instant information transfer is not possible".  Others have expanded on the details, but the key point is that everything we currently know about the properties of "wavefunction collapse" indicates it cannot transfer any information: you cannot "force" the collapse to occur in a specific way, so that another party measuring its entangled partner would make a specific measurement.

Think of it this way: you have a pair of (almost infinite-faced) magic dice, that you know will show the same faces next time they're thrown, and they can only be thrown that one time.  You give one to someone moving far, far away.  Can you use them to communicate anything?  No, because you cannot even tell which one of you threw the dice first.  You cannot tell if the other side has thrown the dice yet, because you need to throw it yourself to find out.  You cannot "peek" at the face either, because that counts as a throw too.  Everything we know thus far about such situations in physics says it is impossible to convey any message, not even a single bit, with such magic dice.  Or entangled particles.
(Here, "throwing" corresponds to the wavefunction collapse between the entangled pair, and the face corresponds to the entangled physical measurable, like spin.)

But again, that is just an analog; the best one that I can construct based on the mathematical description of entanglement.  So, you can poke holes in it, easy.  To me, its purpose is to give a rough intuitive understanding of the limitations here, not a precise one, because the rough one suffices for purposes of making new interesting stuff.  Researchers and scientists, you need to look at the math instead.
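The dice analogy can even be run as a toy simulation (my own sketch, nothing more): whichever side throws first, the statistics visible to one party alone never change, which is the no-signaling point in miniature.

```python
import random

random.seed(0)

def alice_histogram(bob_throws_first, n=100_000):
    # Each trial uses a fresh pair of "magic dice" sharing one
    # predetermined face, fixed when the pair is created.
    counts = [0] * 6
    for _ in range(n):
        face = random.randrange(6)
        if bob_throws_first:
            _bob_sees = face   # Bob's throw reveals the shared face first...
        counts[face] += 1      # ...but Alice's own tally is uniform either way.
    return [c / n for c in counts]

with_bob = alice_histogram(bob_throws_first=True)
without_bob = alice_histogram(bob_throws_first=False)
# Alice cannot tell from her local data whether Bob ever threw his die.
print(max(abs(a - b) for a, b in zip(with_bob, without_bob)))
```

Both histograms are uniform to within sampling noise; nothing Bob does (or doesn't do) shows up on Alice's side alone.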
 
The following users thanked this post: RoGeorge

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6414
  • Country: ro
I know I shouldn't try
...

No, you should try more often.
Much appreciated, and thank you for taking the time to write it.

Offline woofy

  • Frequent Contributor
  • **
  • Posts: 346
  • Country: gb
    • Woofys Place
If we create a pair of entangled particles and send them on their way, then current thinking suggests that they share properties such as spin: until they are measured, we don't know which is spin up and which is spin down.  Indeed, they are both at the same time.
Suppose we measure particle A and find it is spin up; we then know that when we measure particle B it will be spin down.  Having "communicated" with particle A faster than the speed of light, particle B is now spin down.  B carries on for a bit and is eventually measured, where we confirm it is spin down.

Now, what has happened?  B not only had an unknown state, but was both at once until A was measured; then it became spin down as the "wave function" collapsed, and carried on as spin down.  What is the difference between B before A was measured and after?  Has it somehow changed?  Is there a hidden variable in there we cannot see but determines that it will be measured as spin down?

If so then why not hidden variables in the first place and avoid all this spooky action FTL stuff?
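For what it's worth, the hidden-variable intuition in this exact scenario is easy to demonstrate with a toy model (my own sketch): a predetermined spin assigned at pair creation reproduces the perfect anti-correlation described above when both sides measure along the same axis.  It is only measurements at unequal analyzer angles, as in Bell-type experiments, that separate this class of model from quantum mechanics.

```python
import random

random.seed(1)

# Toy "glove"/hidden-variable model: each pair carries a predetermined
# spin, fixed at creation (the hidden variable).  Measured along the
# SAME axis, this reproduces the perfect anti-correlation exactly,
# which is why hidden variables look plausible in this scenario.
def make_pair():
    s = random.choice((+1, -1))  # hidden variable, set when the pair is born
    return s, -s                 # A carries s, B carries -s from the start

pairs = [make_pair() for _ in range(10_000)]
print(all(a == -b for a, b in pairs))      # every pair is anti-correlated
print(abs(sum(a for a, _ in pairs)))       # yet each side alone looks random
```

So same-axis correlations alone can never rule hidden variables out; that is precisely why Bell had to design a test at mixed angles.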

The quantum world really screws with your head.

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6414
  • Country: ro
[to what] the mathematical construct we call a wavefunction corresponds to in reality


About using math to prove physics hypotheses, or about using math to understand physics, I don't think it is possible at all.

(Let's put aside the "understanding" part for a while.  Understanding means being able to simulate an outcome in your mind, which usually happens after some training and interaction with the new thing, such that the brain can develop an intuition about it.  Math helps with understanding, but it is not enough.  You (meaning anybody) also need to experiment and practice with the new thing; otherwise you think you understood, when in fact you only know, but haven't understood yet.)

I say math cannot prove physics because math came into existence from the physical world, from how it manifests.  We wanted to count sheep, and came up with algebra; we wanted to measure land, and came up with geometry.

The rules and axioms in math were distilled from the properties of the physical world and from how it behaves.  Later, we (the humans) noticed that algebra, geometry, calculus, and who knows what other branches of mathematics are equivalent: any of these domains can be used to solve the same problem.  I think the math branches are all equivalent because they came into existence to describe the same thing, our physical world.

I think the biggest advantage of mathematics (compared to other analysis methods) comes from the fact that math only included rules that were based on physical laws; it didn't include any human beliefs or social norms.

Now, if math was made according to the physical world, then trying to use math to prove physics would be a circular definition: physics is the way it is because math validates it, and math is the way it is because we made math according to physics.

Mathematics is very useful in physics (not asking to ban math), but I don't think math can be used to prove physics.



Another thing regarding math's limitations: math came into existence based on only a narrow slice of all of physics.

We are aware of only a small part of all the manifestations of the physical laws.  We keep extending both the math and the physics, and keep deducing what we cannot observe, but the fact that we started from only a small slice of the universe when inventing math might also be an indicator that math is incomplete, and that there might be physics laws that require a new and more complete mathematics we do not have yet.

Math is only a representation of physics, and most probably an incomplete one.  There is still hope that math is already a complete representation, but that would be true only if physics is indeed reducible to a small number of fundamental laws, and only if we were lucky enough that all of those fundamental laws were already represented here on Earth while we were counting sheep, measuring land, and distilling math axioms.



On the same line of thought (about which axioms we use in mathematical deduction, or which tokens we use in human thinking), it is to be expected that we arrive at strange, contradictory, or even wrong conclusions when adding ad-hoc axioms (like the consciousness of an observer, social norms, religion, etc.).

I see the same kind of useless results (as in not verifiable, and with no technological application) when extra dimensions are added for no apparent reason (as in string theory), or when the laws of physics are postulated as irreducible: claiming that all is a big mess, that the order we see is just a coincidence, so stop looking for the fundamental laws of the universe and just take it as it is.

Another such (rather damaging) idea is that the universe is like a video game, a simulation, which is a modernized version of the "brain in a vat", itself a modernized Plato's cave.  At least Plato's cave had a moral in it; the simulated universe offers only a shrug: yeah, it could be or it could not.
« Last Edit: May 20, 2024, 11:13:25 am by RoGeorge »
 

Offline Nominal Animal

  • Super Contributor
  • ***
  • Posts: 6469
  • Country: fi
    • My home page and email address
The quantum world really screws with your head.
It definitely does.  For me, trying to truly understand it is like trying to understand fine dining etiquette as a cat or a dog.  Sure, one can learn the rules, but the dog does it because it wants to follow the/your rules, and the cat because, uh, it decided to see what the fuss is about; the concepts of "fine dining" and "etiquette" are simply utterly irrelevant and untranslatable to them.

Sometimes it is just better to inhale the kibble and go take a nap.

About using math to prove physics hypotheses, or about using math to understand physics, I don't think it is possible at all.
For the reasons I outlined above, I really don't have an opinion on such at all.

I've fully accepted I will never know the "true reality" –– but would love to, oh so much! :P –– and am satisfied to try and not add to the confusion (maybe even contribute a tiny little bit?), so that perhaps, just perhaps, someday somewhere someone might.
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6414
  • Country: ro
Is there a hidden variable in there we cannot see but determines that it will be measured as spin down?

If so then why not hidden variables in the first place and avoid all this spooky action FTL stuff?

That is a question debated since the very beginning of quantum mechanics; all physicists, and not only them, have tried to figure out the answer.  Meanwhile, 100 years have passed, and it is still unclear what is happening.  ;D

At some point, somebody came up with an experimental idea that was supposed to settle the debate.  It was about observing certain probabilities, where a mathematical inequality (or equality) was supposed to give a definitive answer; hence the name "Bell inequality".  The experiment was performed, and the inequality was violated, which was supposed to mean "it's not like the gloves": no hidden variable.

All good so far, except that the experiment was not easy to follow, and might have included unintended wrong assumptions.  Many pointed out possible inconsistencies, both in the math and in the experimental results, and addressing them was called "closing the loopholes".  After a while, some physicists claimed all the loopholes had been addressed, and no doubt was left.

Some years ago, in one of the decisive experiments, the result was to be observed by plotting a chart of the measurements.  If the plotted line were made of straight segments (like this: /\), then it's a hidden variable, like the pair-of-gloves idea.  If, instead of straight segments, the plot came out curved like a bell shape, then it's all spooky and no gloves.

Well, the plot from that experiment was curved, so no hidden variable.  In 2022, the Physics Nobel Prize was awarded for work related to the Bell experiments.
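For reference, the two curve shapes being described can be written down directly (a standard textbook comparison, sketched here; the formulas are the usual ones for a spin singlet and for the simplest predetermined-outcome model, not taken from the post):

```python
import math

# Quantum mechanics predicts the singlet-state spin correlation
# E(theta) = -cos(theta) at analyzer-angle difference theta: a curve.
# The simplest predetermined-outcome ("glove") model instead gives a
# straight, piecewise-linear correlation E(theta) = -1 + 2*theta/pi
# on [0, pi].  These are the two shapes referred to above.
for deg in (0, 30, 45, 60, 90):
    th = math.radians(deg)
    quantum = -math.cos(th)
    linear = -1 + 2 * th / math.pi
    print(f"{deg:3d} deg   quantum {quantum:+.3f}   linear {linear:+.3f}")
```

The two agree at 0° and 90°, but between them the cosine lies below the straight line (stronger anti-correlation than any such hidden-variable model allows), and that gap is what Bell-type experiments measure.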

Some are still not happy with that conclusion.  For example, I think there are other effects that could have curved those lines, more precisely the detection noise.  I do not have any training in physics, so what I believe doesn't carry much weight, but there are others not yet convinced too, and some of them are physicists claiming they have spotted inconsistencies.

My main argument for not believing that the curved shape of that plot is definitive evidence doesn't even come from the physicists, but from this paper:

A Classical System for Producing “Quantum Correlations” by Robert H. McEachern
https://vixra.org/abs/1609.0129

There are some follow-ups to that paper that illustrate how the same results observed in the Bell tests might be produced by randomness alone (or, said differently, caused by the detection noise), and not be proof of the absence of hidden variables:  https://vixra.org/author/robert_h_mceachern

The paper is not mine, the author is not a physicist (he is an astrophysicist by profession, IIRC), and the paper is self-published on viXra (not arXiv).  However, the paper made sense to me, and I took it as a solid disproof of the Bell test interpretation, because it shows a counterexample.  No matter how watertight a demonstration may be, a counterexample invalidates it.

« Last Edit: May 20, 2024, 01:41:20 pm by RoGeorge »
 

Offline HuronKing

  • Regular Contributor
  • *
  • Posts: 237
  • Country: us
Unfortunately, photons are more complex than gloves  :D
:D
 And while "complex" is definitely the wrong word for the quip to be exactly right, it is an excellent example of how we don't even have intuitive terms to describe this yet!  The only word that comes to my mind here is "weird", but it's even less descriptive!

It's doubly funny because "complex" is exactly the word and pun I wanted to make. I don't have time to catch up on all the discussion that's happened (I see you've written another long and insightful post) so forgive me if I retread old ground.

But for me, what the quantum entanglement paradox suggests is how important understanding sampling is to the outcome of the experiments. But, not in a "the experimenter affects the measurement" kind of way - like say, touching a material to find out how hot it is will change the temperature of the material, because you're interacting with it with the temperature of your own measurement device (your hand).

No, the results are more 'fundamental' than that (Heisenberg's Uncertainty is more than just Observer Effect, it's a law of nature) and it requires a firm and deep grasp of "complex analysis" to understand what's going on. And, I think it's more intuitive than we give it credit for - once you're willing to accept Fourier Analysis.  ;D
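That Fourier connection can be sketched numerically (my own construction, using a Gaussian pulse, not from the videos): the product of a signal's RMS width in time and in frequency stays constant, so narrowing one spreads the other, which is the signal-processing form of the uncertainty relation.

```python
import numpy as np

def rms_widths(sigma_t, n=1 << 16, dt=1e-3):
    """RMS widths in time and frequency of a Gaussian pulse exp(-t^2/2s^2)."""
    t = (np.arange(n) - n // 2) * dt
    x = np.exp(-t**2 / (2.0 * sigma_t**2))
    p_t = x**2 / np.sum(x**2)                # normalized |x(t)|^2
    spec = np.abs(np.fft.fft(x))**2
    f = np.fft.fftfreq(n, dt)
    p_f = spec / np.sum(spec)                # normalized |X(f)|^2
    return (np.sqrt(np.sum(p_t * t**2)),     # delta-t
            np.sqrt(np.sum(p_f * f**2)))     # delta-f

# Squeeze the pulse in time and it spreads in frequency, and vice
# versa: the time-bandwidth product stays the same for every width.
for s in (0.02, 0.1, 0.5):
    dt_, df_ = rms_widths(s)
    print(f"sigma={s:4.2f}  dt={dt_:.4f}  df={df_:8.4f}  product={dt_*df_:.4f}")
```

For a Gaussian the product sits at the theoretical minimum, 1/(4π) ≈ 0.0796; any other pulse shape does worse, never better.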

These two videos are my favorites on the topic:




I didn't understand this connection to undergraduate electrical engineering mathematics until I was well into my career and decided to learn some quantum electrodynamics and watched a lecture where Feynman seemed irritated at a line-of-questioning from a student about wave-particle duality, and Feynman kept saying,
"No, a photon is a particle, it's a particle, stop saying it's a wave, no, it's a particle..."

Then I began seeing quantum physics, and its built-in 'weirdness', as just Fourier transforms being done on particles.  It also led me to see how obvious it should be that there are no hidden variables - if we accept Fourier analysis as correct.  I sleep much more easily at night.  :) :=\
« Last Edit: May 20, 2024, 08:11:47 pm by HuronKing »
 

Offline HuronKing

  • Regular Contributor
  • *
  • Posts: 237
  • Country: us
I'll have to watch this one later - came out just last year and the YT comments seem to imply it's better than either of the two previous videos!


I'm emphasizing this because I think it is much, much harder to grasp why quantum mechanics has to be the way that it is without grasping Fourier Analysis.
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6414
  • Country: ro
if we accept Fourier Analysis as correct

I'm not very sure what Fourier Analysis is.  If it is about the Fourier Transform, then yes, the FT is correct in a mathematical way.  In physics, however, it is not that easy to say when FT results make sense or not.  In order for the FT to work, it needs the concept of infinity.  In the real universe, however, everything seems to be bounded in one way or another.  Infinity is not real.

Even if infinity were real and applicable in physics (real as in having examples in the physical universe, examples independent of our own thought and imagination), there still remains the problem of mapping from physics to math, and back to physics after some math processing.  That is why we always have to check whether math results make sense, and then validate them experimentally.  I'm not saying the encoding/decoding between physics and math is incorrect, only that it is tricky, and that we do not have any known tool to prove the encoding/decoding to/from math was done correctly, other than double-checking the results experimentally.

There are more traps in mathematics.  It doesn't have any axioms related to causality, or to time.  Causality and time are present everywhere in physics, but in math they are absent.  These two traits, causality and time, are left to be represented by our skill in encoding/decoding a problem between the physics domain and the math domain.  Very prone to error.

Mathematicians have no problem operating with infinity, having some infinities bigger than others, and so on.  Math is all about building an entire world on a set of axioms, but the axioms can be changed.  In contrast with math axioms, the laws of physics cannot be changed or selected at our wish.

At first, the math axioms were deduced from physical observations alone.  Then the axioms were modified or extended, giving birth to different worlds; some results were contradictory (e.g. the sum of the angles in a triangle is not necessarily 180°), but they came from different sets of axioms, so no problem: mathematically, each is correct.  Over time, the sets of axioms were changed, either to cover more of the physical world or to allow more advances in math, and math became a world in itself.  At first, all math results were about the physicality of our world, but not any more.  The 1:1 mapping between math and physics is long gone.  Now, a math result may or may not be applicable back to physics.

Math is great when applied with care, and when the conclusions are validated experimentally.  But math is not evidence.  My point is, the FT can work great and be mathematically correct, yet entanglement may be no more than synchronized waves.



More arguments for entanglement being synchronized waves: entanglement does not last forever, contrary to what popular science suggests.  That would hold only in an ideal situation.  In practice, decoherence happens rather fast, just like two oscillators that were once synchronized, then separated and left to run freely.  This is the main reason why quantum computers in practice only have a handful of qubits.  The wave synchronization (of whatever quantum object is used as a qubit) gets perturbed; the fancy word is decoherence, or disentangling.  This is no different from oscillators getting out of sync.  Yes, the qubits are really small and easy to perturb and put out of sync.

Another hint that there is nothing weird about entanglement is that it happens only when the particles are put in close vicinity (or are even born together).  It is known that oscillators close enough to interact tend to synchronize and lock to each other.  It happens at macroscopic scale, too.  In EE, for example, if you try to measure intermodulation distortion, you might sometimes need an isolator between the two generators when summing their waveforms.  If they can exchange energy and are close in frequency, the two instruments will tend to sync and lock to a single frequency (which is unwanted in this case).
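The oscillator-locking analogy is easy to demonstrate with the standard two-oscillator Kuramoto model (the model choice is mine, purely to illustrate the locking behavior described above, not a claim about entanglement itself):

```python
import math

# Two coupled oscillators with slightly different natural frequencies.
# With coupling K above the detuning, their phase difference locks to a
# small constant; with K = 0 the phases drift apart, the "getting out
# of sync" the analogy compares to decoherence.
def final_phase_diff(K, w1=1.00, w2=1.05, dt=1e-3, steps=200_000):
    p1, p2 = 0.0, 0.5
    for _ in range(steps):
        d = p2 - p1
        p1 += (w1 + K * math.sin(d)) * dt   # each oscillator pulls toward
        p2 += (w2 - K * math.sin(d)) * dt   # the other, in proportion to K
    return (p2 - p1 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]

print(abs(final_phase_diff(K=0.5)))  # coupled: locks to a small, fixed offset
print(abs(final_phase_diff(K=0.0)))  # uncoupled: the difference drifts freely
```

Locking occurs whenever the total coupling exceeds the frequency detuning (here 2K > 0.05); below that threshold the phase difference never settles.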

I do not know why the spins pair up; my best guess is that it's because that's the state with minimum energy, thus the most stable and the most probable to be encountered, but I don't have an understanding of spin (understanding as in having grown an intuition about it).



The puzzling property to me is not entanglement, but quantization.  Why can something very small only exist in certain "chunks", called quanta?  This property seems to be present in everything in the physical world, as long as it is very small.  I suspect it is related to the fact that infinity cannot exist in the physical world.

Side note: similar to the infinite, the concept of "nothing" is also not real, as in not present in the physical world.  Just like the infinite, nothingness is an imaginary construct.  Nothingness cannot exist in the physical world.

Speaking of nothingness being only an imaginary construct and not real (because it is related to the infinite being imaginary), and because it was asked in this thread a page ago:
What is nothing?
« Last Edit: May 21, 2024, 06:46:09 am by RoGeorge »
 

Offline RoGeorge

  • Super Contributor
  • ***
  • Posts: 6414
  • Country: ro
watched a lecture where Feynman seemed irritated at a line-of-questioning from a student about wave-particle duality, and Feynman kept saying,
"No, a photon is a particle, it's a particle, stop saying it's a wave, no, it's a particle..."

Feynman getting irritated at a student is no surprise.  :)  He was kind of a jerk in real life.  Smart, yes, but arrogant even for his times, and unpleasant to work with.  That's not my opinion; I'm only repeating what I've seen on camera, what other physicists who worked directly with Feynman have said.  He was not exactly the nice and charismatic character pictured in the YT interviews with Feynman talking about the beauty of a flower.

Was that lecture, by any chance, about the many-paths interpretation (Feynman's path integral)?

That interpretation requires particles, and it assumes the particle somehow follows all the possible wiggling trajectories, an infinity of them.  Why would a photon do that?  The many-paths idea did not originate with Feynman, but somehow his name remained associated with the many-paths interpretation, just as E=mc² was published in a physics journal two years before Einstein published his E=mc² ( https://www.scientificamerican.com/article/was-einstein-the-first-to-invent-e-mc2/ ), yet somehow the formula remained associated only with Einstein's name.

Feynman did contribute to the many-paths idea, even if the paternity was not entirely his, and the many-paths picture has some applications (IIRC the path integral is the nicest explanation for backscattering).  Still, the many paths are not real; the photon does not wiggle through the entire universe, taking all possible paths, before hitting a screen 2 meters away from the light source.  That interpretation is imaginary and exists in the math only, and photons are not particles.  I think photons are waves, small packets of undulations.

TL;DR: I guess intimidating the student was meant to defend the many-paths interpretation; I don't think Feynman truly denied that photons are waves.
« Last Edit: May 21, 2024, 09:22:37 am by RoGeorge »
 

Offline m k

  • Super Contributor
  • ***
  • Posts: 2165
  • Country: fi
A quantum object can't exchange energy; if it does, it stops being a quantum object.
That means the entanglement of the quanta remains.

Before a probability wave collapses, its energy has an equal possibility of being anywhere on the surface of the wave.
After a collapse, the probability disappears completely, and all the energy is where the interaction happens and nowhere else.

The probability is what operates here, not the wave.
The wave here is just a descriptive name; it's there only because the probability has the shape of a real-world wave.

Quantization is a result of experiment.
It just establishes that there are steps, not that there is nothing between the steps.
The nasty part is that if the step is 2, it can't be reached with two 1s.
Intensity doesn't matter here, so a very intense 1 is still just 1.

OT, pretty much.
If the infinite hotel is full, it still has infinitely many free rooms.
Advance-Aneng-Appa-AVO-Beckman-Danbridge-Data Tech-Fluke-General Radio-H. W. Sullivan-Heathkit-HP-Kaise-Kyoritsu-Leeds & Northrup-Mastech-REO-Simpson-Sinclair-Tektronix-Tokyo Rikosha-Topward-Triplett-YFE
(plus lesser brands from the work shop of the world)
 

Offline HuronKing

  • Regular Contributor
  • *
  • Posts: 237
  • Country: us
Thanks for your reply! I think we are mostly on the same page but I have a few comments I'd like to make.

...In the real universe, however, everything seem to be bounded in a way or another.  Infinite is not real.

That's not what infinity means in integration mathematically - nor is that how it's applied to physical problems.

In fact, this statement is actually nonsensical and easily disproved with a counterexample.

Can I not, as the 3rd video above shows, put a particle onto a circular path?  Can I not roll that particle around the path in either direction, forever?  That is, the possible number of rotations of the particle IS unbounded, because it is free to travel around the path once, twice, thrice... up to an infinite number of times, in either direction (clockwise or counterclockwise around the loop).

Yeah, you might say "well, the ball can't roll in a circle forever because the universe might actually end..." but that doesn't accurately describe the system of the ball.  A function whose domain is negative infinity to positive infinity describes *everything* the ball could ever do.  It's not that mysterious or universe-breaking.

Quote
That is why we always have to check if math results make sense, then have to validate them experimentally.

Absolutely - and what's wonderful about the mathematics of quantum mechanics is that it yields very testable predictions, and in many cases it led directly to the discovery of entirely new phenomena (the existence of positrons just "fell out" of Dirac's equation, because the energy relation admits both positive and negative solutions).

Quote
The puzzling property to me is not entanglement, but quantization.  Why something that is very small can only exist as a certain "chunk", called quanta?  This is a property that seems to be present in everything from the physical world, as long as it is very small.  I suspect it is related with the fact that infinity can not exist in the physical world.

You should watch the 3rd video I posted. It's actually *because* of infinity that matter must be quantized because the wave number has to be a whole positive integer to encompass every possible period of the function over the whole domain. The specific timestamp where this starts to be explained is here:
https://youtu.be/W8QZ-yxebFA?si=3giGjzlnnhNCs6A-&t=648
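The core of that argument fits in a few lines (my own sketch of the standard particle-on-a-ring boundary condition, not taken from the video): requiring the wavefunction to be single-valued around a closed loop forces the wave number to be an integer.

```python
import cmath
import math

# Single-valuedness on a ring: psi(theta) = exp(i*k*theta) must satisfy
# psi(theta + 2*pi) == psi(theta), i.e. exp(i*k*2*pi) == 1.  That holds
# only when k is an integer: quantization drops out of the periodic
# boundary condition, with no extra postulate needed.
def single_valued(k, tol=1e-9):
    return abs(cmath.exp(1j * k * 2 * math.pi) - 1) < tol

print([k for k in (-3, 0, 1, 2) if single_valued(k)])      # integers pass
print([k for k in (0.5, 1.3, 2.99) if single_valued(k)])   # non-integers fail
```

(Allowing negative integers corresponds to travelling around the loop the other way, which matches the clockwise/counterclockwise point made earlier.)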


Quote
[Feynman] was kind of a jerk in real life.  Smart, yes, but arrogant even for his times, and unpleasant to work with.  Not my opinion, only repeating what I've seen on camera, what were saying other physicists that worked directly with Feynman.  He was not exactly the nice and charismatic character, as pictured in the YT interviews with Feynman talking about the beauty of a flower.

Hah, yea, I'm not suggesting Feynman was a particularly pleasant person by remarking on him - nor am I giving him sole credit for his contributions as obviously he shared the Nobel Prize for Path Integral Formulation.

All I mean to say is that my insight into this came from listening to him get annoyed with a student and emphasize that we have a world of particles whose behavior is probabilistic, because you can't know their states to arbitrary precision (because of Fourier).

I personally don't like the term "wave-particle duality."  I think it does more to confuse students than to help them understand what is going on.  Maybe it helps other people, and that's great!  But in my experience helping engineering students with this, they spend so much time slogging through Fourier material for signal processing that it's amazing how quickly they grasp the 'obviousness' of quantum mechanics at more than an academic level - you start to see why quantum mechanics is the more fundamental law of nature.  I like this video, which explains why F = ma is a consequence of quantum mechanics:



Is light a particle or a wave?  It's a particle - whose position and momentum are not deterministic but probabilistic.
« Last Edit: May 21, 2024, 04:09:11 pm by HuronKing »
 

