Also one more thing I just noticed that they have working FPGA prototype, does this mean that their chip is at least working fine or what does it mean?
Also here link: https://www.hpcwire.com/off-the-wire/tachyum-sends-prodigy-chip-fpga-emulation-prototype-to-manufacturing/
It is a prototype so that they can develop software.
The X86-type chips run an instruction set that descended from the Intel 8085, and has had things added on for the last FORTY-plus years! First it was moved to 16 bits, then 32 bits, now 64 bits, all slapped on without a clean re-design. It is a complete gargoyle! The chips then take the instructions apart and convert them into a kind of internal pseudo-code, which is processed as a bunch of micro-operations as data and results flow through the processor. They have done amazing things with it, but it is really doing it the hard way.
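To make the "taking instructions apart" idea concrete, here is a toy sketch of how a CISC instruction with a memory destination can be cracked into simpler micro-operations, roughly the way modern x86 decoders work. The function name, micro-op names, and temp register are my own illustration, not any real chip's internal format.

```python
# Hypothetical sketch: cracking a CISC-style instruction into
# RISC-like micro-ops. Not any real decoder's actual format.

def crack(instruction):
    """Decompose one x86-style instruction into micro-ops."""
    op, dst, src = instruction
    uops = []
    if dst.startswith("["):                    # destination is memory
        addr = dst.strip("[]")
        uops.append(("LOAD", "tmp", addr))     # read memory into a temp
        uops.append((op, "tmp", src))          # ALU work on registers only
        uops.append(("STORE", addr, "tmp"))    # write the result back
    else:
        uops.append((op, dst, src))            # register-only: one micro-op
    return uops

# One memory-destination ADD becomes three micro-ops:
print(crack(("ADD", "[rbp+8]", "rax")))
```

The point of the post stands: the front end does all this translation work on every instruction before the rest of the machine ever sees it.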
So, I have no doubt at all that a clean-sheet design directed at improving efficiency and performance could make a huge difference. The holdback is that they want to keep it X86 compatible. Since all (new) code is now compiled in higher-level languages, backward compatibility is much less important. Just write the compiler back-end and recompile your code.
On the other hand, silicon technology seems to have hit a wall almost 15 years ago. In 2005, we were moving up to 2.5-3 GHz CPU clocks, and we are now at 4 GHz. That's NOT a lot of progress in 15 years. Some guys I work with have been developing a design methodology for GALS (globally asynchronous, locally synchronous) chips, where major subsections pass tokens around to keep processes in sync with data transfers. This gets around the issue of keeping billions of transistors in total lock-step sync all the time.
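The token-passing idea can be modeled in software: two blocks run on independent "clocks" and synchronize only when data crosses between them, via a handshake. This is just my own toy analogy in Python (a one-deep queue as the token channel), not the methodology those designers actually use.

```python
# Toy GALS analogy: two locally synchronous blocks, each with its own
# timing, exchange data through a one-deep handshake channel instead
# of sharing a global clock. Purely illustrative.
import queue
import threading

channel = queue.Queue(maxsize=1)   # one-deep handshake between blocks
results = []

def producer():                    # block A: runs at its own pace
    for value in range(3):
        channel.put(value)         # waits until block B takes the token

def consumer():                    # block B: independent local timing
    for _ in range(3):
        results.append(channel.get() * 2)  # take token, do local work

a = threading.Thread(target=producer)
b = threading.Thread(target=consumer)
a.start(); b.start()
a.join(); b.join()
print(results)  # data crossed the two timing domains in order
```

Neither side ever cares what "time" it is on the other side; only the handshake at the boundary matters, which is the whole appeal on a chip with billions of transistors.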
Jon