EEVblog Electronics Community Forum
Products => Computers => Programming => Topic started by: SiliconWizard on August 24, 2022, 06:13:08 pm
-
https://github.com/carbon-language/carbon-lang
Yes. Yet another attempt from Google to come up with a C++ successor. Go and Dart were just appetizers.
This one is kind of a copycat of Rust from what I've seen so far, without the whole borrowing stuff and some other features, but otherwise it's very close, even syntax-wise.
Given that reducing our "carbon footprint" is all the rage lately, I guess the naming here is a bit unfortunate from a marketing POV. (Then again, one might say that the same applies to Rust. Yeah, good names are hard to find. Or maybe it's just intentional.)
So, what will be the next one after Carbon?
-
I really dislike how a lot of modern languages are designed for the parsers. What is this "fn" and "var" nonsense? It is not needed and just adds extra stuff to type.
Single return value in 2022 is a huge fail.
Other than this, it looks half-baked at the moment. I'm sure Google can force their engineers to use it for internal needs, but I seriously doubt it will go anywhere for wider adoption.
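For reference, since C++ is the baseline being compared against: modern C++ approximates multiple return values with std::tuple and structured bindings, which I assume is the kind of thing the single-return-value complaint is about. A minimal sketch of my own (not Carbon code):

// How modern C++ approximates multiple return values today,
// via std::tuple and C++17 structured bindings.
#include <iostream>
#include <tuple>

std::tuple<int, int> divmod(int a, int b) {
    return {a / b, a % b};   // both results travel back in one tuple
}

int main() {
    auto [q, r] = divmod(17, 5);   // structured bindings unpack both at once
    std::cout << q << " remainder " << r << '\n';
}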
-
Parsing is something that modern language designers have forgotten how to do. I took one look at Go and noped out of it as soon as I discovered that it enforces K&R bracketing "because that way we don't have to require semicolons," as a Go advocate put it.
:palm:
-
I wonder what the reason for that language's existence is. Other than building financial value, of course. The project goals start with "having a code of conduct" — not exactly what I'd expect to be the primary motivation for a new language. The rest of the documents give no explanation either.
There is a lot of talk about giving the developer control over performance and low-level access. But the language seems not to address the issues both C and C++ have in that matter: operation ordering and timing. Unless the language is guaranteed to never optimize code and to be a high-level assembler, but that doesn't seem to be among its goals. Do they mean having precisely defined numeric types? Java has had those for 24 years,(1) both C and C++ have them if you want to enforce a particular environment, and TBH thinking about solving that kind of "horrible problem" was something I dealt with in my second year of college.
As for being explicit with keywords, I do not see a problem. Sure, that seems like forcing programmers into the parser's perspective. But keep in mind: as the syntax grows in complexity, those explicit keywords prevent ambiguities which would otherwise lead to weird constructs introduced just to satisfy the parser (see the sketch below). In the end the syntax is likely to be constrained by the parser anyway: they can do it cleanly now or duct-tape it later. Having a parser-friendly syntax also helps the development of new tools.
(1) Even longer, if we ignore floating point types.
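To make the ambiguity point concrete, here is a minimal C++ sketch of my own (not from the Carbon docs) of the sort of parse ambiguity that keyword-introduced declarations like "fn" and "var" sidestep:

#include <iostream>

struct Widget {
    int value = 42;
};

int main() {
    // "Most vexing parse": this declares a FUNCTION named w taking no
    // arguments and returning a Widget, not a default-constructed object.
    Widget w();

    // A declaration introduced by an explicit keyword (Rust/Carbon-style
    // "var w: Widget;") cannot be misread this way. In C++ you dodge the
    // ambiguity with brace initialization instead:
    Widget w2{};
    std::cout << w2.value << '\n';
}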
-
I bet this is primarily for internal use. Having a reference compiler that you fully control allows for all sorts of interesting corporate stuff. In that case being close to C++ is a benefit. Being able to address a few issues here and there is nice, but you don't want to go wild, at least at first.
And it makes automated code converters easier to make. And they published those tools before any documentation.
-
I wonder what the reason for that language's existence is. Other than building financial value, of course.
Existing modern languages already provide an excellent developer experience: Go, Swift, Kotlin, Rust, and many more. Developers that can use one of these existing languages should. Unfortunately, the designs of these languages present significant barriers to adoption and migration from C++. These barriers range from changes in the idiomatic design of software to performance overhead.
Carbon is fundamentally a successor language approach, rather than an attempt to incrementally evolve C++. It is designed around interoperability with C++ as well as large-scale adoption and migration for existing C++ codebases and developers. A successor language for C++ requires:
Performance matching C++, an essential property for our developers.
Seamless, bidirectional interoperability with C++, such that a library anywhere in an existing C++ stack can adopt Carbon without porting the rest.
A gentle learning curve with reasonable familiarity for C++ developers.
Comparable expressivity and support for existing software's design and architecture.
Scalable migration, with some level of source-to-source translation for idiomatic C++ code.
With this approach, we can build on top of C++'s existing ecosystem, and bring along existing investments, codebases, and developer populations. There are a few languages that have followed this model for other ecosystems, and Carbon aims to fill an analogous role for C++.
Also,
Interoperate with your existing C++ code, from inheritance to templates
Fast and scalable builds that work with your existing C++ build systems
I don't know how far they will get with it, but can you name any other language which even attempts to address these concerns?
R**t, for example, is simply "rewrite everything and forget that C++ ever existed".
-
Enables automatic, opt-in type erasure and dynamic dispatch without a separate implementation. This can reduce the binary size and enables constructs like heterogeneous containers.
Somebody is finally starting to think, unlike certain earlier "better than C++" languages :clap:
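For context, a rough sketch of my own (not from the Carbon docs) of what that claim is measured against in today's C++: a heterogeneous container usually means hand-writing a virtual interface as a separate implementation and paying for owning pointers to get the type erasure:

// What a heterogeneous container typically requires in today's C++:
// a hand-written virtual interface plus owning pointers for dynamic dispatch.
#include <iostream>
#include <memory>
#include <vector>

struct Shape {                                   // the manual interface
    virtual ~Shape() = default;
    virtual double Area() const = 0;
};

struct Circle : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double Area() const override { return 3.141592653589793 * r * r; }
};

struct Square : Shape {
    double s;
    explicit Square(double s) : s(s) {}
    double Area() const override { return s * s; }
};

int main() {
    std::vector<std::unique_ptr<Shape>> shapes;  // heterogeneous via dynamic dispatch
    shapes.push_back(std::make_unique<Circle>(1.0));
    shapes.push_back(std::make_unique<Square>(2.0));
    for (const auto& s : shapes)
        std::cout << s->Area() << '\n';
}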
-
So, what will be the next one after Carbon?
C, when they realise they have to get stuff done and not play around trying to get this week's new language ;)
-
R**t, for example, is simply "rewrite everything and forget that C++ ever existed".
There are a lot of other ways to combine code than fine-grained linking. For a language which tries to have a strictly memory "safe" subset, it's not really an option. Everything gets tainted, so what's the point?
A memory-safe systems programming language has a raison d'être and can be justified at least some of the time; Carbon, to me, doesn't seem to be worth the effort. Just use C or a C++ subset.
-
There are a lot of other ways to combine code than fine-grained linking. For a language which tries to have a strictly memory "safe" subset, it's not really an option. Everything gets tainted, so what's the point?
The point is that they would spend an eternity on regression testing if they were to rewrite large codebases from ground up in a new language. And half an eternity plus a performance hit if they were to split them into mixed-language submodules with message passing, databases or whatever in between.
A memory-safe systems programming language has a raison d'être and can be justified at least some of the time; Carbon, to me, doesn't seem to be worth the effort. Just use C or a C++ subset.
Safer fundamentals, and an incremental path towards a memory-safe subset
BTW, I'm certainly not a Google fanboy, nor a Carbon fanboy, or anything like that.
But answers to most of the criticism here can be found right in their FAQ, so :-//
-
The alternative is still finding simple buffer overflows in their code even though they run every static analysis tool known to man in their build process.
Carbon doesn't have safety at all right now; it's not designed for it. Praying they can jury-rig it in later without having to do a ton of recoding regardless is optimistic.
-
As I said, at this point, it doesn't look like a lot more than just a dumbed-down version of Rust, more approachable for the masses, even down to the syntax and maybe 75% of its features. ::)
-
I think that was already posted before, but here goes, pretty hilarious:
https://www.youtube.com/watch?v=vcFBwt1nu2U
And, if you want some rather "strange" content that may also trigger some laughter, but which is not meant to, you can watch some of the CppCon talks. Or even some of the Rust channels where people spend hours explaining how to solve relatively trivial problems with Rust. Pretty entertaining.
-
I really dislike how a lot of modern languages are designed for the parsers. What is this "fn" and "var" nonsense? It is not needed and just adds extra stuff to type.
I sincerely doubt that this has anything to do with the parser. As a compiler guy and language designer who has designed and written many parsers by hand, I can confidently say that writing parsers for this kind of stuff is a mere technicality. If your syntax is so ambiguous that writing the parser by hand becomes difficult, you can even generate it automatically.
A serious language designer shouldn't let a rather banal technicality like a parser dictate the language design. Of course, writing good parsers is a complex topic, but the reason for that is *not* that the actual parsing of the syntax is difficult but that good, meaningful *error recovery* is difficult.
The reasons for keywords like "fn" and "var" should stem from computer-scientific language design considerations. Of course, it's ok if you don't like them and would leave them out, but I would be pretty sure that they serve a purpose other than making the parser "simpler" (which it really does NOT become because the true reason for difficulty is, as I pointed out, error recovery and reporting).
Keep in mind that an "obvious simplification" (meaning a simplification that seems obvious at first glance) can be the exact opposite, namely a deep complication. In such cases, difficulties and "logical impedances" may arise at places where you never expected them. A programming language is an extremely complex formal system where all the syntactical parts must play along with BOTH semantical AND practical considerations.
-
I don't particularly mind those extra keywords at all. If anything, I find them a bit too shortened for no useful reason other than saving a few keystrokes, which is one of my pet peeves. (Damn, buy a good keyboard and use a good editor, and stop whining.) Clear keywords punctuate source code much better than an obfuscated syntax does IMHO, no matter how used to it you are. Source code is for humans. Machines can swallow horrendous shit correctly as long as it's unambiguous. And yes, parsers are not rocket science. They are definitely a solved problem and have been for a long time now.
So that's absolutely not the problem I have with those new languages (as I said, if anything I find the abbreviated keywords unreasonably short for no useful reason other than to address the whining crowd's "It's too verbose, I won't use it. Ha!").
As I suggested, just have a look at the specifications of those languages - an honest, and deep enough look. And then go watch talks and learning videos and see for yourself what is wrong. That will eventually pop up.
-
There are probably only a few parsers in the world capable of understanding all of C++.
Try writing another one and then tell us it's not a big deal :P
-
There are probably only a few parsers in the world capable of understanding all of C++.
Try writing another one and then tell us it's not a big deal :P
We are talking about *parsers*, not compilers. While writing a C++ parser is certainly no small task, it is still not that big a problem. Writing a C++ compiler is a different story.
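A small illustration of my own of why parsing all of C++ still takes more than a grammar: the same token sequence is a declaration or an expression depending on what a name currently refers to, so the parser has to consult a symbol table just to build the parse tree:

// The same token sequence "T * q;" is a declaration or an expression
// depending on what T currently names, which makes C++ parsing
// context-sensitive. (A compiler may warn that the products are unused.)
struct T {};

void as_declaration() {
    T * p = nullptr;   // T names a type here: declares p as "pointer to T"
    (void)p;
}

void as_expression() {
    int T = 6, q = 7;  // these locals hide the struct name in this scope
    T * q;             // same tokens, but now it's a multiplication expression
}

int main() {
    as_declaration();
    as_expression();
}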
-
Here's a tip for a happy technology life: If Google made it, avoid it. They are like children with ADHD, and abandon things RAPIDLY.
-
I like playing with OS's ... doesn't matter whose, nor how obscure, etc. If I can get it to load and kick the tires a bit, it was both a success, and fun. While I have a few OS's that are daily drivers, the rest is just ... experimental and fun!
Must be folks like that in programming? Who want something new to kick the tires on ... They might use C++, Python, or whatnot to make a living, but at home, or on lunch breaks, *Carbon*-ize it ... that sounds fun (and green as a bonus)?
-
"Debating" (arguing) over which prog lang is "best" is like debating which shoes, clothes, food, movies are "best"; it's a subjective choice. They ALL compile down to binary, and the only sane human level above that is ASM... so learn ASM, for the education alone - even if you end up never using it - and then stack up on top of that.
-
"Debating" (arguing) over which prog lang is "best" is like debating which shoes, clothes, food, movies are "best"; it's a subjective choice.
No. As a computer scientist who has been developing both enterprise and some embedded software for 28 years in many different languages, I can only say that this has nothing to do with "subjective choice". There are very serious mathematical and practical differences between programming languages, mainly
* what abstractions they support,
* how far their support for formal verification goes, and
* what non-functional characteristics they have.
Telling people that choice of programming languages is "subjective" is not only grossly wrong but also irresponsible.
They ALL compile down to binary
That's completely and utterly non-factual. Honestly, please.
-
There are glaring problems in C++ so it is understandable to want to improve on it. But most languages that claim to be a successor to C++ fail to understand the strengths of the language. The D programming language is one rare exception. Go, Swift, Kotlin, Rust: yeah they have their own merits, but don't incorporate the advantages of C++. Just due to past attempts, I have my doubts that carbon will have learned to do it right, but maybe. It is always worth a try. There appears to be some promise at least on paper.
I like that C++ is an ISO standard, that there are multiple compilers for it, that it isn't owned by a company. That brings with it plenty of downsides but is fairly unique. It takes a massive amount of time and effort to get a language to this level.
-
"Debating" (arguing) over which prog lang is "best" is like debating which shoes, clothes, food, movies are "best"; it's a subjective choice.
No. As a computer scientist who has been developing both enterprise and some embedded software for 28 years in many different languages, I can only say that this has nothing to do with "subjective choice". There are very serious mathematical and practical differences between programming languages, mainly
* what abstractions they support,
* how far their support for formal verification goes, and
* what non-functional characteristics they have.
Telling people that choice of programming languages is "subjective" is not only grossly wrong but also irresponsible.
They ALL compile down to binary
That's completely and utterly non-factual. Honestly, please.
I agree, upon reflection, that the majority of that post was a little (or a lot) based on my inexperience, as I am not a programmer.
However, your last quotation - that makes no sense - do you think the silicon speaks ENGLISH? Processors talk IN BINARY - whether or not you have a known compilation process, or the interpreted language silently compiles the source on the fly, it ALL compiles down to binary; it can't NOT do so, otherwise how do you propose that the transistors on the die work? NO other way except the base level fundamental mechanism of transistors arranged into logic gates.
-
The high-level language design affects how efficient your final binary can be.
C does not care about array bounds checking; it will happily access elements outside the array. Really fast binary code, though. Swift has array bounds checking: each array access has to compile into a sequence of instructions that first checks that the index is valid for the given array. Safe, but (relatively) slow code (see the sketch at the end of this post).
So, "everything is binary" is a true, but completely meaningless sentence.
This is why Rust talks a lot about "zero cost abstractions". They are purposefully designing high level language concepts keeping in mind how they would be implemented in the final binary code.
And if you really think that you can write assembly code that is as efficient as code generated by the compiler, then you really have no idea what you are talking about. Given the low-level architectural optimizations in modern CPUs, it is no longer possible to write sufficiently large chunks of code that would be faster than compiler-generated code. There is just too much low-level stuff to keep in mind and adjust for individual CPU architectures.
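Picking up the bounds-checking point above in C++ terms, here is a minimal sketch of my own (Swift builds the check into plain subscripting, but the trade-off is the same):

// Unchecked vs checked element access in C++.
#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v{10, 20, 30};

    std::cout << v[1] << '\n';     // unchecked: compiles to little more than a load;
                                   // an out-of-range index here is undefined behaviour
    std::cout << v.at(1) << '\n';  // checked: every access first validates the index

    try {
        std::cout << v.at(7) << '\n';   // the checked path catches the mistake at run time
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}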
-
So, "everything is binary" is a true, but completely meaningless sentence.
It's only "meaningless" until you get to wanting the chip - the very thing that runs the show - to understand it. Silicon dies don't speak anything BUT binary, and even then they are not "speaking" it, it's merely voltages leaning against MOSFETS, causing them to change state when the correct combination is reached.
I am not meaning to appear pedantic, but they LITERALLY CANNOT function another way.
[EDIT]: Besides, for all this "digital" stuff, and our human arrogance of "advanced digital technology", it's STILL all analogue components strung together at a minute scale; there is no intrinsic "digital-ness" in physics, it is a human design, 0101 0101
-
CPUs don't "speak". But sure, believe whatever nonsense you want.
Also, I'm not really sure what your MOSFET musings contribute here at all. Go down to atoms, think bigger.
-
In reference to your edits:
"And if you really think that you can write assembly code that is as efficient as code generated by the compiler, then you really have no idea what you are talking about."
I don't recall having said that, and didn't. The bottom line is, irrefutably, binary is the end product. That was my ONLY point. No ego or emotion involved here, just a fact. Nope, I cannot and would not write a compiler, let alone a better one :)
-
CPUs don't "speak". But sure, believe whatever nonsense you want.
Also, I'm not really sure what your MOSFET musings contribute here at all. Go down to atoms, think bigger.
I feel, based on our past interactions, you are intent on being "right" and starting an argument here.
I humbly ask you to READ AGAIN and not read through the "he's WRONG!!" filter - I explicitly clarified that silicon DOES NOT "speak" - we are talking about arrangements of logic gates, and the "combination" of the signals is what is "seen" as the one that causes said logic arrangement to execute its opcode by means of all parts of it waiting for said correct combination of "1" and "0" to be in alignment.
You clearly show great disdain for those you deign "below" you; I know you are a clever chap, so being intelligent, you will know that there is little to achieve from "proving" someone else is "wrong" and you are "right" - we may BOTH be wrong... so please, find another use of your time.
(https://cdn.someecards.com/someecards/usercards/not-now-honey-someone-on-the-internet-is-wrong-4cd6d.png)
-
I feel, based on our past interactions, you are intent on being "right" and starting an argument here.
I humbly ask you to READ AGAIN and not read through the "he's WRONG!!" filter - I explicitly clarified that silicon DOES NOT "speak" - we are talking about arrangements of logic gates, and the "combination" of the signals is what is "seen" as the one that causes said logic arrangement to execute its opcode by means of all parts of it waiting for said correct combination of "1" and "0" to be in alignment.
You clearly show great disdain for those you deign "below" you; I know you are a clever chap, so being intelligent, you will know that there is little to achieve from "proving" someone else is "wrong" and you are "right" - we may BOTH be wrong... so please, find another use of your time.
Eti, given your recent thread about problems you've been having, I'm going to have to recommend that you let this be and walk away from this thread.
-
Eti, given your recent thread about problems you've been having, I'm going to have to recommend that you let this be and walk away from this thread.
Yep. I've no intention of allowing something so meaningless to eat my lunch or spoil my week. Cheers Dave :)
-
I agree, upon reflection, that the majority of that post was a little (or a lot) based on my inexperience, as I am not a programmer.
I respect your insight.
However, your last quotation - that makes no sense - do you think the silicon speaks ENGLISH? Processors talk IN BINARY - whether or not you have a known compilation process, or the interpreted language silently compiles the source on the fly, it ALL compiles down to binary; it can't NOT do so, otherwise how do you propose that the transistors on the die work? NO other way except the base level fundamental mechanism of transistors arranged into logic gates.
You are right that processors "talk in binary", however you are making a logical short circuit when concluding that this means your code has to be compiled into CPU-specific machine language -- or even compiled at all:
* code can be compiled into other formats (e.g., CPU-independent bytecode);
* code can be interpreted.
You probably already know languages that are not used in conjunction with compilers, for instance many dialects of BASIC (introduced in the 1960s) and Python (introduced in 1989).
-
My point is, and it cannot be ANY other way, that even if not actively done by the programmer, SOMETHING HAS to translate/compile the bytecode or whatever INTO BINARY, whether done consciously as a step by Mr Programmer, or done automagically and hidden from view. It all HAS to be binary by the time it's the turn of the processor to process... in binary.
I am being pedantic, as pedantry in this case is factual precision. A silicon die literally cannot "speak" - it blindly executes various assortments of mix and match 0100101100010101 states, which is then decoded back into something for the system to act upon. Utter pedantry would be to point out (as another person did) about atoms - yeah - it's obvious atoms exist, ergo MOSFETS are formed from them, but my point is that, no matter WHAT does it, CPU = only binary code, so something, somewhere ALWAYS produces a binary stream FOR the CPU, and that is ALL it can process.
>>> http://www.differencebetween.net/technology/difference-between-bytecode-and-machine-code/
-
I am being pedantic, as pedantry in this case is factual precision.
You are not at all being pedantic, you are just mixing up different categories that don't belong together. You are actually being imprecise because you don't clearly differentiate between some important concepts.
Let's remember what question we are discussing. Your claim was that it doesn't matter what language we use because they all have to be translated to some sort of binary format. And THAT is not correct in two ways at the same time:
1. On an abstract level, there are many ways in which languages may differ from each other that have nothing to do with whether they are compiled or not. You already agreed to this point.
2. On a technical level, when there is an interpreter, the code is NOT translated into some binary form. It just doesn't happen. Let's REALLY be pedantic and precise here: The CPU is executing the interpreter's code, not the application's code. The CPU never sees the application's code. Only the interpreter sees that code, and in a fundamentally different way than the CPU would see it.
Item 2 enables very deep manipulations and tricks at runtime that are not possible with statically compiled code (well, depending on the runtime environment and the CPU). So, this distinction has not just theoretical consequences but also very relevant practical ones.
For instance, most of the time Python code is not translated into binary form. What the interpreter does is not the same as compiling into binary. If we want to be pedantic and precise, we have to acknowledge this fact. (Actually, we have to acknowledge it even if we are not being pedantic.)
-
Again, ALL the CPU can understand is binary. No amount of "debates" or refutation will change that. Even as a non-programmer I know that. It literally can't NOT be so.
No matter how the code written ends up in that binary state, it still does.
-
Again, ALL the CPU can understand is binary. No amount of "debates" or refutation will change that. Even as a non-programmer I know that. It literally can't NOT be so.
No matter how the code written ends up in that binary state, it still does.
Now you are again in a different category. "Binary state", what's that supposed to mean? Honestly, all of this is just so imprecise that it's difficult to reach any conclusion.
Of course you can transform any data in some kind of binary *representation*. I repeat: *representation*, ok? But that doesn't mean anything. In our context, the only meaningful thing is the CPU instruction set. And the code does not need to be mapped into this instruction set.
-
Fixpoint, don't waste your time, this is unfixable. As a "non-programmer" he has a very surface level understanding of how things work.
By that logic, source code itself is also "binary".
-
Fixpoint, don't waste your time, this is unfixable. As a "non-programmer" he has a very surface level understanding of how things work.
By that logic, source code itself is also "binary".
I don't need to be a programmer to understand the FUNDAMENTALS of ones and zeroes influencing arrangements of transistors as logic gates. Would you care to condescend to me a little more?
-
LOL.
-
Again, ALL the CPU can understand is binary. No amount of "debates" or refutation will change that. Even as a non-programmer I know that. It literally can't NOT be so.
No matter how the code written ends up in that binary state, it still does.
Now you are again in a different category. "Binary state", what's that supposed to mean? Honestly, all of this is just so imprecise that it's difficult to reach any conclusion.
Of course you can transform any data in some kind of binary *representation*. I repeat: *representation*, ok? But that doesn't mean anything. In our context, the only meaningful thing is the CPU instruction set. And the code does not need to be mapped into this instruction set.
Tell me, old bean, how are instructions presented to the CPU? Yep, binary. My initial and current point is that the end result of ALL the layers is that a CPU *ONLY* understands binary code, and the *WHATEVER* you want to call it, IS ALWAYS converted to binary - the way one talks to the CPU, and unless you have some (currently in ubiquitous use) magical CPU which doesn't use binary, tell me, what else IS there?
-
Well, the thing is, I already answered exactly those questions. At this point, I really can only repeat myself. I try a last time with even more detail:
1. The CPU executes instructions in a binary format called its machine language. The whole of those machine codes is its instruction set.
2. The program that is executed by the CPU is, in our case, an interpreter.
3. The application program is not executed by the CPU. It can't be because it was never compiled to machine code, and that's the only thing the CPU understands.
4. The application program is, on a technical level, DATA, not CODE, and is executed by the interpreter, which in turn is executed by the CPU.
5. A classic interpreter (like the original BASIC and PHP interpreters) operates directly on the source code. There is no intermediate byte code compilation involved.
6. Of course, for performance reasons source code can be compiled into byte code (which, mind you, has nothing to do with the CPU's instruction set!) which is then executed by an interpreter called a virtual machine.
So, let me say it a last time: When working with interpreters, the source code is NOT translated to any kind of binary format. It is only REPRESENTED in some binary form, like ASCII or whatever. But this has nothing to do with the CPU because we are talking about DATA here while the CPU only understands CODE (machine instructions).
In order to get these things right, you have to differentiate between fundamental concepts like syntax vs semantics, data vs representation of data, data vs code, compilation vs interpretation. If you mix all these things up, you can end up with statements like "everything is the same because in the end there are only 1s and 0s". Well, yes, in the end there are only 1s and 0s, but that DOES NOT MEAN EVERYTHING IS THE SAME. That's just a logical short.
PS: The screenshot you showed contains false statements. It claims that "the virtual machine converts bytecode into specific machine instructions". That is awful nonsense. Honestly, I get very annoyed when I see such incompetence. The people who wrote that obviously don't know what an interpreter is, what a VM is, and what a JIT compiler is, or they just don't care for correct language. Just awful.
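To make points 2 to 4 concrete, here is a minimal toy interpreter sketch of my own (the "bytecode" is invented for illustration and has nothing to do with any real VM or with the host CPU's instruction set). The CPU executes the interpreter's machine code; the program being "run" stays data that the interpreter walks over:

// A toy stack-machine interpreter: the "program" is just an array of bytes
// that this C++ code reads as data; it is never translated to host machine code.
#include <cstdint>
#include <iostream>
#include <vector>

enum Op : std::uint8_t { PUSH, ADD, MUL, PRINT, HALT };

void run(const std::vector<std::uint8_t>& program) {
    std::vector<std::int64_t> stack;
    std::size_t pc = 0;                       // program counter into the *data*
    while (pc < program.size()) {
        switch (program[pc++]) {
            case PUSH:  stack.push_back(program[pc++]); break;
            case ADD:   { auto b = stack.back(); stack.pop_back(); stack.back() += b; break; }
            case MUL:   { auto b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
            case PRINT: std::cout << stack.back() << '\n'; break;
            case HALT:  return;
        }
    }
}

int main() {
    // "(2 + 3) * 4", encoded as data for the interpreter to walk over.
    run({PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT});
}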
-
1. The CPU executes instructions in a binary format called its machine language. The whole of those machine codes is its instruction set.
*** FINALLY. That is the ONLY thing I am talking about - T___H___E________O___N____L____Y______T___H___I___N___G
I maybe worded it ambiguously, but my initial point, a few posts ago (again, probably I worded it poorly) is that, at the end of the day, and at the BARE METAL, all that matters to the CPU is seeing pins high and low, aka binary.
Sorry to confuse everyone - I am bailing from this thread, hah. I am sure I might have misunderstood some of this, and I apologise profusely if I appeared, or was arrogant.
-
Again, ALL the CPU can understand is binary. No amount of "debates" or refutation will change that. Even as a non-programmer I know that. It literally can't NOT be so.
No matter how the code written ends up in that binary state, it still does.
I don't get what you are trying to say though.
Because if you are actually saying that programming languages don't matter, because they all end up as binary code anyway, you are clearly missing the whole point of programming.
-
Yeah I cocked it right up, the explanation. I think we might abandon it, put it down to human failure... I hope!