Author Topic: EEVblog #858 - Red Pitaya  (Read 25378 times)


Offline Howardlong

  • Super Contributor
  • ***
  • Posts: 4814
  • Country: gb
Re: EEVblog #858 - Red Pitaya
« Reply #50 on: March 12, 2016, 07:52:53 am »
I realise I'm somewhat of a Luddite, but I've yet to see any real-world evidence, as opposed to contrived examples and a very few edge cases, that profile-based adaptive optimisation provides any gains in an overall sense compared to static optimisation.

We've had similar techniques in the database world for a couple of decades now. The main problem with them is lack of determinism, which is a serious frustration when troubleshooting, particularly the non-functional facets of a system. All that code you diligently tested is now doing something different, but not all the time. I'd much rather troubleshoot a consistently slow system that I can reproduce than one that sometimes randomly starts running ten or a hundred or a thousand times slower.

My point is that something you non-functionally tested in lower environments is more likely to behave differently in higher environments, because it's virtually impossible to come up with a practical test matrix.
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 10704
  • Country: gb
    • Having fun doing more, with less
Re: EEVblog #858 - Red Pitaya
« Reply #51 on: March 12, 2016, 09:14:22 am »
I realise I'm somewhat of a Luddite, but I've yet to see any real-world evidence, as opposed to contrived examples and a very few edge cases, that profile-based adaptive optimisation provides any gains in an overall sense compared to static optimisation.

That would be an extraordinarily difficult thing to do with a real-life application - how can you meaningfully compare, say, a Java application with a C application? In all such cases the quality of the libraries and application, the development timescale and the availability of staff would totally dominate any results.

However, the key point of the HP Labs report (http://www.hpl.hp.com/techreports/1999/HPL-1999-78.html) is that it unambiguously shows the surprising power of HotSpot-class techniques. It destroys the ignorant prejudice that "Java has to be slow because it is interpreted".

I say "surprising" because they weren't looking for such a significant result. Some commentators have even described it as "accidentally becoming a viable C compilation technique", but I think that is overblown!

I've personally witnessed key portions of application-server-based code become three times faster as HotSpot kicked in. BTW, as you know, realistically it has to be server-based, not embedded!

Quote
We've had similar techniques in the database world for a couple of decades now. The main problem with them is lack of determinism, which is a serious frustration when troubleshooting, particularly the non-functional facets of a system. All that code you diligently tested is now doing something different, but not all the time. I'd much rather troubleshoot a consistently slow system that I can reproduce than one that sometimes randomly starts running ten or a hundred or a thousand times slower.

I'd argue that your system architecture's function must be insensitive to detailed timing delays, since that's what will occur in production. Of course, when benchmarking performance, it is vital to "warm up" a HotSpot JVM before doing the performance test, but that is neither difficult nor lengthy.

Quote
My point is that something you non-functionally tested in lower environments is more likely to behave differently in higher environments, because it's virtually impossible to come up with a practical test matrix.

In that case the system architecture and implementation is badly defective, unless you restrict "differently" to mean "slower but still correct".
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

Offline rsjsouza

  • Super Contributor
  • ***
  • Posts: 3620
  • Country: us
  • Eternally curious
    • Vbe - vídeo blog eletrônico
Re: EEVblog #858 - Red Pitaya
« Reply #52 on: March 12, 2016, 01:08:25 pm »
It seems they have a business model focused on the hardware and accessories (with some third-party contribution), so saying that the closed hardware detracts from the product is far too narrow. One can argue it either way, but IMO the effort put into such a compact DAQ must be rewarded in some way.

Also, developing a polished GUI is not an easy task, but the large number of drop-down menus, the large buttons, the apparently empty areas (on a desktop browser at least) and the "wheel" at the bottom are a clear giveaway that they were trying to cater to tablets or modern notebooks and their touchscreens.

I agree with others regarding the WiFi issues: it is really difficult to address all the different configurations and their pitfalls, and I have the impression your router is getting in the way. This is traditionally addressed by FAQs and documentation, but that is not easy to do for a newly released product (which I think this is, but maybe I am wrong).

Interesting to see the design decision to go for dual A9 cores integrated into an FPGA - perhaps compactness was a very important factor. In earlier times I would never have expected to see this combination, simply due to the power density.

All that said, IMO the issues shown in the video (and corroborated by others) make the product unusable as a DAQ.
Vbe - vídeo blog eletrônico http://videos.vbeletronico.com

Oh, the "whys" of the datasheets... The information is there not to be an axiomatic truth, but instead each speck of data must be slowly inhaled while carefully performing a deep search inside oneself to find the true metaphysical sense...
 

Offline Howardlong

  • Super Contributor
  • ***
  • Posts: 4814
  • Country: gb
Re: EEVblog #858 - Red Pitaya
« Reply #53 on: March 12, 2016, 01:26:01 pm »
I realise I'm somewhat of a Luddite, but I've yet to see any real-world evidence, as opposed to contrived examples and a very few edge cases, that profile-based adaptive optimisation provides any gains in an overall sense compared to static optimisation.

That would be an extraordinarily difficult thing to do with a real-life application - how can you meaningfully compare, say, a Java application with a C application? In all such cases the quality of the libraries and application, the development timescale and the availability of staff would totally dominate any results.

That's really part of my point.

Quote
Quote
We've had similar techniques in the database world for a couple of decades now. The main problem with them is lack of determinism, which is a serious frustration when troubleshooting, particularly the non-functional facets of a system. All that code you diligently tested is now doing something different, but not all the time. I'd much rather troubleshoot a consistently slow system that I can reproduce than one that sometimes randomly starts running ten or a hundred or a thousand times slower.

I'd argue that your system architecture's function must be insensitive to detailed timing delays, since that's what will occur in production. Of course, when benchmarking performance, it is vital to "warm up" a HotSpot JVM before doing the performance test, but that is neither difficult nor lengthy.

No, I'm talking predominantly about non-functional aspects, in particular systems running fast one minute and slow the next, frequently by orders of magnitude. One of the key facets of troubleshooting is having a consistently reproducible scenario. The problem is that stochastic methods like this produce non-deterministic results (in terms of performance) that are frequently difficult to reproduce in a controlled environment. Also bear in mind that much of the time it is not even possible to test with real data in certain domains, such as finance or other areas involving personally identifiable information.

Quote
Quote
My point is that something you non-functionally tested in lower environments is more likely to behave differently in higher environments, because it's virtually impossible to come up with a practical test matrix.

In that case the system architecture and implementation is badly defective, unless you restrict "differently" to mean "slower but still correct".

Yes, that is why I said "non-functional". I am sure we've all been on the end of a phone to a call centre when they say "the system's running slow today", or when something unexpectedly takes longer than usual. As an end user, I'd much prefer something that consistently runs in, say, 5 seconds to something that runs in 1 second 95% of the time and a minute 5% of the time, even though the aggregate time in the latter case is less (0.95 × 1 s + 0.05 × 60 s ≈ 4 s).

Anyway I'm now waaay off topic!
 

Offline Howardlong

  • Super Contributor
  • ***
  • Posts: 4814
  • Country: gb
Re: EEVblog #858 - Red Pitaya
« Reply #54 on: March 12, 2016, 01:36:33 pm »
I am not sure what I did, but I spent three hours of my life today trying to get the WiFi to work. I did succeed, but in trying to document the steps, I ended up back at square one.

One of the confusing things, I am sure, is that after making certain changes to the SD card, such as adding the wpa_supplicant.conf, you need to reboot the RP a second time after the change. I am pretty certain I did this, as documented, and it didn't work, but having polished up my "penguin skills" (I love that!), randomly pushed buttons and read through dozens of mostly half-relevant and out-of-date topics online, I got it to work.

In documenting my changes, I went back to a blank SD card, followed the instructions without my changes first, and it started working. So rather frustratingly I now have no clue what I did to get it to work.

Basically, though, I'll say this...

o Note that by default, the RP tries to work as an AP rather than as a WiFi client, which confuses things.
o After adding the wpa_supplicant.conf to the SD card from your main computer, don't forget to reboot twice (a minimal example file is sketched below).
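
For what it's worth, here is a minimal sketch of what a client-mode wpa_supplicant.conf typically looks like. The SSID and passphrase are placeholders, and the exact fields the RP image expects may differ by release, so treat the official docs as authoritative:

Code:
# Minimal client-mode wpa_supplicant.conf - placeholders, edit for your network
ctrl_interface=/var/run/wpa_supplicant
update_config=1
network={
    ssid="YourNetworkSSID"
    psk="YourWPA2Passphrase"
    key_mgmt=WPA-PSK
}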

 

Offline vodka

  • Frequent Contributor
  • **
  • Posts: 456
  • Country: es
Re: EEVblog #858 - Red Pitaya
« Reply #55 on: March 12, 2016, 07:08:10 pm »
Quote
It destroys the ignorant prejudice that "Java has to be slow because it is interpreted". In other words, with Hotspot type techniques, an emulated processor+program is faster than the real processor+program.

It's simply illogical, because when you emulate a processor you always need more RAM and ROM, and with the program loaded on top of that it is impossible for it to run faster than the original processor running the compiled program. Unless the computer that contains the emulated processor has more RAM, ROM and computing power - then it is possible, but it isn't fair play.

When I emulated a Nintendo 64 game on an AMD K6 it went badly; even with an updated graphics card, I had to turn the sound off because all I heard was beeps.

And the HotSpot JVM is programmed in C++, object-oriented on top of that (heaven help us) :palm:
 

Online tggzzz

  • Super Contributor
  • ***
  • Posts: 10704
  • Country: gb
    • Having fun doing more, with less
Re: EEVblog #858 - Red Pitaya
« Reply #56 on: March 12, 2016, 09:02:48 pm »
Quote
It destroys the ignorant prejudice that "Java has to be slow because it is interpreted". In other words, with Hotspot type techniques, an emulated processor+program is faster than the real processor+program.

It's simply illogical, because when you emulate a processor you always need more RAM and ROM, and with the program loaded on top of that it is impossible for it to run faster than the original processor running the compiled program. Unless the computer that contains the emulated processor has more RAM, ROM and computing power - then it is possible, but it isn't fair play.

No. Read the HP Dynamo report before you repeat incorrect statements. Then you will be surprised, and finally you will understand your errors.

The subject under discussion is speed, not memory usage. There was sufficient RAM for all programs; ROM is irrelevant to this discussion. The same PA-RISC processor and operating system (HP-UX) was used throughout.

From the abstract: "the performance of many +O2 optimized SPECint95 binaries running under Dynamo [i.e. emulated] is comparable to the performance of their +O4 optimized version running without Dynamo." Think about that: the more-optimised code running natively sometimes performed worse than the less-optimised code being emulated. Static compilation of C code simply cannot be very efficient, due to C language features (especially the possibility of aliasing) - see the sketch below. And if you don't believe/understand that, then you should discuss it with the High Performance Computing crowd, who have been pushing machine performance professionally for half a century.
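
To make the aliasing point concrete, here is a hypothetical C fragment (my own illustration, not taken from the Dynamo report). Because the two pointers may refer to the same object, a static compiler must generate a re-load; a runtime optimiser that has observed they never alias can cache the value and fall back in the rare case the assumption is violated:

Code:
/* Hypothetical illustration of aliasing blocking static optimisation. */
void accumulate_twice(int *a, int *b)
{
    *a += *b;   /* this store might modify *b if a == b ...          */
    *a += *b;   /* ... so *b must be re-read from memory here,       */
                /* unless the compiler can prove a and b never alias */
}
/* With C99 "int * restrict a, int * restrict b" the programmer asserts
   no aliasing, and the compiler may then keep *b in a register. */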

Quote
When I emulated a Nintendo 64 game on an AMD K6 it went badly; even with an updated graphics card, I had to turn the sound off because all I heard was beeps.

OK; that shows you don't know how to write emulators. Big deal. Fortunately other people do know how to write emulators.
There are lies, damned lies, statistics - and ADC/DAC specs.
Glider pilot's aphorism: "there is no substitute for span". Retort: "There is a substitute: skill+imagination. But you can buy span".
Having fun doing more, with less
 

