Author Topic: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion  (Read 8571 times)


Online SiliconWizard

  • Super Contributor
  • ***
  • Posts: 15198
  • Country: fr
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #50 on: October 13, 2020, 05:50:04 pm »
I didn't know about this device yet. That confirms Intel's interest. This just looks like a first step though IMO: it's apparently just an existing FPGA die coupled to a CPU die. I expect something more integrated in the future, like FPGA fabric possibly directly on the same die and possibly more "tightly coupled".
It's more efficient to use separate dies on an interposer, because you are not limited to using the same process for all subcomponents, and yields are generally better for smaller dies. And AMD's success in recent years has proven that SiP designs (which is what all their modern CPUs are) are the way to go.

It's more efficient in certain contexts. This is not a general recipe.

1. If both elements can't be implemented on the same process (for technical or cost reasons), you simply don't have a choice.
2. For multi-core CPUs, it makes sense. You can develop dies with 4 or 8 cores each, and then interconnect several dies on an interposer to make CPUs with a lot more cores, giving you a lot of flexibility. In some cases it also helps with thermal management. It obviously also helps with yield: throwing away a defective 32-core CPU on a single huge die is way more expensive than throwing away a 4-core die (see the rough sketch below). Typical interconnections for multi-core designs also lend themselves well to this.
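To put rough numbers on the yield point, here is a minimal sketch using a simple Poisson defect model; the defect density and die areas are invented figures, purely for illustration:

```python
import math

def die_yield(area_mm2, defects_per_mm2=0.001):
    """Poisson yield model: fraction of defect-free dies of a given area."""
    return math.exp(-area_mm2 * defects_per_mm2)

# Hypothetical comparison: one 32-core monolithic die vs. a 4-core
# chiplet at 1/8 the area (numbers invented for illustration).
mono_area = 800.0             # mm^2
chiplet_area = mono_area / 8  # mm^2

print(f"32-core monolithic die yield: {die_yield(mono_area):.1%}")    # ~44.9%
print(f"4-core chiplet yield:         {die_yield(chiplet_area):.1%}") # ~90.5%
# A defect kills a whole 800 mm^2 die in the monolithic case, but only
# a 100 mm^2 chiplet in the SiP case -- that's the cost asymmetry.
```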

OTOH, with separate dies, you're obviously limited in how much of the internals you can expose to die pads.
So integrating some FPGA (or, more generally, reconfigurable logic) deeply on the same die would certainly allow tighter integration. You could get access to a core's internals in ways impossible through the external connections alone. I can think of a few interesting applications for this that would make the cores themselves reconfigurable to some degree, whereas just interconnecting an FPGA die to a CPU die makes it possible to use it as some kind of coprocessor (which already allows lots of nice stuff), but not much else.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2782
  • Country: ca
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #51 on: October 13, 2020, 06:21:54 pm »
So integrating some FPGA (or, more generally, reconfigurable logic) deeply on the same die would certainly allow tighter integration. You could get access to a core's internals in ways impossible through the external connections alone. I can think of a few interesting applications for this that would make the cores themselves reconfigurable to some degree, whereas just interconnecting an FPGA die to a CPU die makes it possible to use it as some kind of coprocessor (which already allows lots of nice stuff), but not much else.
You will destroy all CPU performance going this way. There is a reason you can only get to 1 GHz+ logic frequencies with hard silicon. Unless by "reconfigurable" you mean a few basic settings, but for that you don't need any FPGA, just a register with some config bits.
Having thousands of interconnect lines between dies on an interposer is not a problem - look at HBM for example. And then there is 3D stacking ("die-on-die"); Intel is pursuing this approach with their Foveros packaging, which allows even lower delays thanks to shorter lines. Xilinx has been using this approach in their SSI (stacked silicon interconnect) devices for a while now.
The time of giant monolithic dies is coming to an end. This generation of GPUs is probably the last (or second-to-last) one that uses a monolithic big-ass die.

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7874
  • Country: nl
  • Current job: ATEX product design
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #52 on: October 13, 2020, 07:02:48 pm »
https://seekingalpha.com/article/4378735-amd-and-xilinx-prize-is-versal-acap-not-fpgas

Found this article: really good read.

Basically it says that AMD's real motivation for the $30 billion comes from Xilinx's ACAP. The article states that the Xilinx core FPGA market has only grown 4.8% since 2013, which isn't worth $30 billion.

It also says that, even though FPGAs may well replace GPUs for AI acceleration, "that market is still small but with strong growth potential," so that alone doesn't make the deal worth it to AMD (in terms of hardware acceleration).

The ACAP, however, is targeted at the fastest-growing "next generation" semiconductor markets such as 5G, defense, autonomous driving assist, etc., which is why the article says AMD wants Xilinx.

The author guesses that Xilinx will reject the offer, due to AMD not having much to offer them.

Any thoughts?
They are making a huge mistake ($30B, to be exact). They bought ATI hoping to integrate the graphics card into the CPU, so it would be stronger than everything, and kumbaya. And they failed. They failed so hard that it led to a decade of Intel dominance in the CPU market. Now they have had two strong years, and they are going to buy another company, ruin their financials, and chase pipe dreams. What is wrong with them?
Google developed the TPU. It is an AI accelerator, very fast. Faster than an FPGA, because it is custom-built to do only that. Amazon has their own AI accelerator chips.
FPGAs are not really good at doing anything. They are just good enough at doing many things. If there is money to be made somewhere, someone will make an ASIC for it.
And if you just want to place a little bit of FPGA in your CPUs, then license the damn thing. You can do that, you don't need the entire company for that.
 
The following users thanked this post: SilverSolder, BrianHG

Online coppice

  • Super Contributor
  • ***
  • Posts: 9307
  • Country: gb
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #53 on: October 13, 2020, 07:14:24 pm »
Google developed the TPU. It is an AI accelerator, very fast. Faster than an FPGA, because it is custom-built to do only that. Amazon has their own AI accelerator chips.
Yep, those chips, and a few others, like Tesla's, seem to be where the action is right now in AI and ML acceleration. They aren't making silicon for the merchant market, though. Someone needs to.

FPGAs are not really good at doing anything. They are just good enough at doing many things. If there is money to be made somewhere, someone will make an ASIC for it.
Actually, FPGAs are really good at getting advanced DSP solutions into the marketplace. That's why cellular has been such an important market for them. Eventually, as the specs stabilise, ASICs are typically made for each generation of cellular system. However, anyone committing to an ASIC too early ends up with useless junk as the specs drift. It's complex FPGAs that get each new generation off the ground.
 
The following users thanked this post: SilverSolder

Offline Berni

  • Super Contributor
  • ***
  • Posts: 5022
  • Country: si
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #54 on: October 14, 2020, 04:22:21 am »
So integrating some FPGA (or, more generally, reconfigurable logic) deeply on the same die would certainly allow tighter integration. You could get access to a core's internals in ways impossible through the external connections alone. I can think of a few interesting applications for this that would make the cores themselves reconfigurable to some degree, whereas just interconnecting an FPGA die to a CPU die makes it possible to use it as some kind of coprocessor (which already allows lots of nice stuff), but not much else.
You will destroy all CPU performance going this way. There is a reason you can only get to 1 GHz+ logic frequencies with hard silicon. Unless by "reconfigurable" you mean a few basic settings, but for that you don't need any FPGA, just a register with some config bits.
Having thousands of interconnect lines between dies on an interposer is not a problem - look at HBM for example. And then there is 3D stacking ("die-on-die"); Intel is pursuing this approach with their Foveros packaging, which allows even lower delays thanks to shorter lines. Xilinx has been using this approach in their SSI (stacked silicon interconnect) devices for a while now.
The time of giant monolithic dies is coming to an end. This generation of GPUs is probably the last (or second-to-last) one that uses a monolithic big-ass die.

They are actually already reconfigurable.

Modern Intel CPUs come with about 1 to 16 KB of microcode that is sort of the "firmware" for the chip. It does the job of configuring some of the larger CPU components and tells the CPU exactly how to decode the complex x86 instructions into the multiple smaller micro-ops that actually get executed. During manufacturing they can also load a special self-test microcode that helps run tests.
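As an aside, on x86 Linux the kernel exposes the currently loaded microcode revision; a minimal sketch to print it (assuming the usual /proc/cpuinfo layout):

```python
# Print the distinct microcode revisions the kernel reports
# (the "microcode" field in /proc/cpuinfo on x86 Linux).
seen = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("microcode"):
            rev = line.split(":", 1)[1].strip()
            if rev not in seen:
                seen.add(rev)
                print("microcode revision:", rev)
```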

They are making a huge mistake ($30B, to be exact). They bought ATI hoping to integrate the graphics card into the CPU, so it would be stronger than everything, and kumbaya. And they failed. They failed so hard that it led to a decade of Intel dominance in the CPU market. Now they have had two strong years, and they are going to buy another company, ruin their financials, and chase pipe dreams. What is wrong with them?
Google developed the TPU. It is an AI accelerator, very fast. Faster than an FPGA, because it is custom-built to do only that. Amazon has their own AI accelerator chips.
FPGAs are not really good at doing anything. They are just good enough at doing many things. If there is money to be made somewhere, someone will make an ASIC for it.
And if you just want to place a little bit of FPGA in your CPUs, then license the damn thing. You can do that, you don't need the entire company for that.

I would not call buying ATI a mistake.

The APU offering of an AMD core + Radeon graphics on the same die was a perfectly good product back when AMD chips still held their own against Intel. It was capable of offering decent gaming performance at a fraction of the price. The GPU core could simply reuse all the support circuitry already there for the CPU, things like the Vcore supply and RAM. The extra cost of the GPU silicon area was not that high, and since they owned ATI there were no royalties or supplier monopolies to contend with. This did mean a beefier Vcore supply, more cooling, and faster RAM to feed it all, but all of this was still significantly less added cost than a dedicated graphics card. So for a span of a few years this was the best bang for the buck if you wanted a cost-effective gaming PC.

The same APU chips also ended up being used in the previous generation of gaming consoles, the Xbox One (50 million sold) and PS4 (110 million sold), making them a big success. Fast forward to today, and the new console generation of Xbox and PS5 is also using AMD's offering of a Ryzen CPU + Radeon GPU on a single chip.
« Last Edit: October 14, 2020, 04:24:19 am by Berni »
 

Offline BrianHG

  • Super Contributor
  • ***
  • Posts: 8031
  • Country: ca
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #55 on: October 14, 2020, 04:43:34 am »
https://seekingalpha.com/article/4378735-amd-and-xilinx-prize-is-versal-acap-not-fpgas

Found this article: really good read.

Basically it says that AMD's real motivation for the $30 billion comes from Xilinx's ACAP. The article states that the Xilinx core FPGA market has only grown 4.8% since 2013, which isn't worth $30 billion.

It also says that, even though FPGAs may well replace GPUs for AI acceleration, "that market is still small but with strong growth potential," so that alone doesn't make the deal worth it to AMD (in terms of hardware acceleration).

The ACAP, however, is targeted at the fastest-growing "next generation" semiconductor markets such as 5G, defense, autonomous driving assist, etc., which is why the article says AMD wants Xilinx.

The author guesses that Xilinx will reject the offer, due to AMD not having much to offer them.

Any thoughts?
They are making a huge mistake ($30B, to be exact). They bought ATI hoping to integrate the graphics card into the CPU, so it would be stronger than everything, and kumbaya. And they failed. They failed so hard that it led to a decade of Intel dominance in the CPU market. Now they have had two strong years, and they are going to buy another company, ruin their financials, and chase pipe dreams. What is wrong with them?
Google developed the TPU. It is an AI accelerator, very fast. Faster than an FPGA, because it is custom-built to do only that. Amazon has their own AI accelerator chips.
FPGAs are not really good at doing anything. They are just good enough at doing many things. If there is money to be made somewhere, someone will make an ASIC for it.
And if you just want to place a little bit of FPGA in your CPUs, then license the damn thing. You can do that, you don't need the entire company for that.
It gets worse: you do not even need to license any FPGA unless you want a complete fabric. AMD already has a degree of firmware-programmable gate reconfigurability built into their GPUs and CPUs. They do not need more. It is standard in such large designs to allow for workarounds once bugs are discovered in the field, so entire dies will not need redesigning.

On the other hand, one reason alone makes sense at this level. AMD may already be licensing some core parts of their CPUs and GPUs from third-party vendors (e.g. PCIe/USB/anything else). If Xilinx has a suitable replacement core within their IP, nothing to do with the main FPGA fabric, say just the die transistor layout of the SerDes and PLLs on the I/O pins, a purchase may work out in the books as a profit to switch over. In that case, Xilinx has a chance of no longer advancing, as AMD just got what they wanted to make their CPUs and GPUs more profitable.
« Last Edit: October 14, 2020, 04:47:28 am by BrianHG »
 

Offline filssavi

  • Frequent Contributor
  • **
  • Posts: 433
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #56 on: October 14, 2020, 06:59:53 am »
The reason they MUST do something about FPGAs is that FPGAs are just starting to be looked at seriously for the mainstream(ish) data center market, where the new trend is to add as much smarts as possible to the NIC in order to offload as much high-speed networking work (100G and up) as possible, as these speeds put a significant load on the CPU. And I am not only talking about network-stack-level stuff, but also potentially layer 7 processing. Here, having a real FPGA inside the CPU would be a godsend.

You must remember that while the desktop segment (and gamers in particular) is very vocal, it represents a vanishingly small part of company revenue, which is dominated by server chips (this is also true for Intel and Nvidia, not just AMD). So anything that gains you market share there is good.

Also, more and more compute-heavy applications are moving to GPUs, making CPUs less and less relevant by the day, and in that segment Nvidia has a chokehold on the market with CUDA: there is many times more CUDA software than OpenCL software, so to gain market share there AMD would need a GPU with drastically higher performance than Nvidia's, keeping in mind as well that the OpenCL software ecosystem (third-party tooling, libraries, etc.) is nearly nonexistent.
Contrast this with FPGAs, where Xilinx is the market leader both in terms of market share and, especially, in terms of technology in the high-level synthesis (software-on-FPGA) field.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 5022
  • Country: si
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #57 on: October 14, 2020, 08:25:01 am »
Well, FPGAs are not quite a magic bullet.

Sure, you can massively accelerate certain computationally light but data-heavy algorithms, but writing software for FPGAs is still a pain that nobody outside the digital electronics field wants to deal with. And the efficiency of FPGA solutions is not that great, in both power and clock speed. Yes, I know there are fancy compilers that turn C code into HDL, but you still have to keep the FPGA's internal workings in mind to be able to write fast and efficient code.

I personally think that the next step is a more I/O-capable, GPU-like architecture. Today, GPUs with compute are taking over all the boring repetitive math by acting as a sort of miniature supercomputer of thousands of cores that munch away at the data in parallel. They execute a small program much like a CPU and are programmed in a similar way, with the usual parallelism issues. The only issue they have is that data needs to be fed into their memory before they can do anything with it.

You could just as well do something similar on, say, a network card. Give it hundreds of tiny processing cores suited to networking tasks, but have the architecture be designed around fast I/O too. Then software can be loaded into the cores to do some rough first-pass processing on the incoming network data. The functionality of each core can be quickly swapped out on the fly if the network traffic suddenly contains more of one type of packet and more cores are needed for it. The computing resources can easily be shared and split between multiple applications running on the host OS.
 

Offline brucehoult

  • Super Contributor
  • ***
  • Posts: 4421
  • Country: nz
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #58 on: October 14, 2020, 09:51:37 am »
I personally think that the next step is a more I/O-capable, GPU-like architecture. Today, GPUs with compute are taking over all the boring repetitive math by acting as a sort of miniature supercomputer of thousands of cores that munch away at the data in parallel. They execute a small program much like a CPU and are programmed in a similar way, with the usual parallelism issues. The only issue they have is that data needs to be fed into their memory before they can do anything with it.

And that's exactly the problem with GPUs.

Another big problem with GPUs is algorithms that alternate parallel and serial parts. Their serial execution speed sucks, and the communication turn-around latency and bandwidth between CPU and GPU suck too. This causes Amdahl's Law problems with the maximum speedup possible from parallelization.
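To put numbers on the Amdahl's Law point, a minimal sketch (the 95% parallel fraction is an assumed figure for illustration):

```python
# Amdahl's Law: if only a fraction p of the work parallelizes, the
# speedup with n workers is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Assume, hypothetically, that 95% of an algorithm parallelizes:
for n in (8, 128, 4096):
    print(f"{n:5d} workers -> {amdahl_speedup(0.95, n):4.1f}x")
# Even infinite workers cap out at 1 / (1 - 0.95) = 20x, which is why
# slow serial sections and CPU<->GPU turn-around latency hurt so much.
```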

Modern vector ISAs avoid both of these problems. If you design them carefully (e.g. the RISC-V Vector extension) then it's easy to compile GPU languages such as CUDA and OpenCL to them and run with high efficiency. When it makes sense you can put thousands of execution elements in your vector unit, but exactly the same code will run perfectly on smaller vector units (see the sketch below).
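To illustrate the vector-length-agnostic idea, here is a sketch in Python of the strip-mined loop structure RVV encourages; `hw_vlen` and `setvl` are hypothetical stand-ins for what the hardware provides:

```python
# The hardware reports how many elements it will handle per pass (the
# vsetvl idea); the identical loop then runs on any vector unit size.
def setvl(remaining, hw_vlen):
    return min(remaining, hw_vlen)

def saxpy(a, x, y, hw_vlen=4):
    i, n = 0, len(x)
    while i < n:
        vl = setvl(n - i, hw_vlen)  # "vsetvli" in RVV terms
        for j in range(i, i + vl):  # stands in for one vector instruction
            y[j] += a * x[j]
        i += vl
    return y

# Same code on "small" and "huge" hardware; only hw_vlen differs.
print(saxpy(2.0, [1, 2, 3], [10.0, 10.0, 10.0], hw_vlen=2))
print(saxpy(2.0, [1, 2, 3], [10.0, 10.0, 10.0], hw_vlen=1024))
```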

ARM SVE is much the same. I haven't verified whether it has the right features to handle CUDA-style programs (with things such as divergence and reconvergence). It's also a bit limited in the number of execution elements possible, as it is restricted to vector registers between 128 and 4096 bits in length -- that's only 128 32-bit ints or floats maximum. RVV's limit on vector register length is 2^32 (or maybe 2^31), and some people are seriously planning implementations with 4k or 16k or 64k processing elements.

You can also interleave parallel and serial processing at the level of individual instructions, and the vector unit can operate from the same L1 or L2 cache as the scalar CPU.
 

Offline Berni

  • Super Contributor
  • ***
  • Posts: 5022
  • Country: si
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #59 on: October 14, 2020, 10:56:35 am »
Nvidia, though, has been pushing this Magnum IO and GPUDirect thing lately, where the GPU is supposed to talk directly to fast SSD storage and networking to make getting data in and out faster and less CPU-intensive.

No idea how that actually works under the hood. They are pushing this tech into the general consumer market as competition to the next-gen game consoles, which will use similar functionality to load and decompress 3D models and textures from SSD directly into video RAM at ridiculously fast speeds.

Does that tech actually deliver on the promise? Nvidia has loved to rush not-yet-ready tech into the market just to be there first.
 

Offline asmi

  • Super Contributor
  • ***
  • Posts: 2782
  • Country: ca
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #60 on: October 14, 2020, 11:18:12 am »
No idea how that actually works under the hood.
The mechanics of it are very simple: since both the GPU and an NVMe SSD are PCI Express devices, as long as they know each other's bus address, they can talk to each other directly. This feature is called "bus mastering" and has been possible since "classic" PCI times. The only caveat with PCI Express is that, since the devices are only logically connected to the same bus, not physically (PCIe is a point-to-point link, not multidrop like "classic" PCI was), they need a root port to route packets between endpoints. But that routing happens inside the PCIe root port without any CPU involvement (well, not exactly, since the PCIe root complex is part of the CPU nowadays, but you get the point).
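For the curious, here is a small sketch of how those bus addresses look from Linux userspace, assuming sysfs is mounted as usual (PCI class 0x03xxxx is a display controller, 0x0108xx is NVMe):

```python
import os

# A GPU and an NVMe SSD are both just endpoints on the same PCIe
# fabric; peer-to-peer transfers work once each knows the other's
# bus address (domain:bus:device.function, the directory name here).
for dev in sorted(os.listdir("/sys/bus/pci/devices")):
    with open(f"/sys/bus/pci/devices/{dev}/class") as f:
        cls = int(f.read().strip(), 16)
    if cls >> 16 == 0x03:
        print(dev, "-> display controller (GPU)")
    elif cls >> 8 == 0x0108:
        print(dev, "-> NVMe storage")
```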

Offline tszaboo

  • Super Contributor
  • ***
  • Posts: 7874
  • Country: nl
  • Current job: ATEX product design
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #61 on: October 14, 2020, 11:28:06 am »
FPGAs are not really good at doing anything. They are just good enough at doing many things. If there is money to be made somewhere, someone will make an ASIC for it.
Actually, FPGAs are really good at getting advanced DSP solutions into the marketplace.
You know what I meant.

It gets worse: you do not even need to license any FPGA unless you want a complete fabric. AMD already has a degree of firmware-programmable gate reconfigurability built into their GPUs and CPUs. They do not need more. It is standard in such large designs to allow for workarounds once bugs are discovered in the field, so entire dies will not need redesigning.

On the other hand, one reason alone makes sense at this level. AMD may already be licensing some core parts of their CPUs and GPUs from third-party vendors (e.g. PCIe/USB/anything else). If Xilinx has a suitable replacement core within their IP, nothing to do with the main FPGA fabric, say just the die transistor layout of the SerDes and PLLs on the I/O pins, a purchase may work out in the books as a profit to switch over. In that case, Xilinx has a chance of no longer advancing, as AMD just got what they wanted to make their CPUs and GPUs more profitable.

They probably want to package the FPGA on the same interposer as their CPUs: a chiplet design. The chiplet design makes a lot of sense, it was a big win for them, and they want to be able to place more stuff in one package. I get that.


I would not call it a mistake of buying ATI.

It was a perfectly good product back when AMD chips still held on against Intel the APU offering of AMD Core+Radeon graphics on the same die. It was capable of offering decent gaming performance at a fraction of the price.
I'm quite sure there were almost dozens of people who were happy that AMD made APUs. In the meantime their market share tanked, they had to sell their fabs, and they had to beg investors for money.

The reason they MUST do something about FPGAs is that FPGAs are just starting to be looked at seriously for the mainstream(ish) data center market, where the new trend is to add as much smarts as possible to the NIC in order to offload as much high-speed networking work (100G and up) as possible, as these speeds put a significant load on the CPU. And I am not only talking about network-stack-level stuff, but also potentially layer 7 processing. Here, having a real FPGA inside the CPU would be a godsend.
So they spent $30B to have an in-house NIC? Like this?
https://www.broadcom.com/products/ethernet-connectivity/network-adapters/100gb-nic-ocp/p2100g

" combining a high-bandwidth Ethernet controller with a unique set of highly optimized hardware acceleration engines to enhance network performance and improve server efficiency."
 

Online coppice

  • Super Contributor
  • ***
  • Posts: 9307
  • Country: gb
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #62 on: October 14, 2020, 02:26:14 pm »
FPGAs are not really good at doing anything. They are just good enough at doing many things. If there is money to be made somewhere, someone will make an ASIC for it.
Actually, FPGAs are really good at getting advanced DSP solutions into the marketplace.
You know what I meant.
I have no clear idea what you mean. I don't know you, so I can't read any prior knowledge of your character into what you write. All I can work from is the wording of your messages, and "FPGAs are not really good at doing anything" is a pretty clear statement. I can't tell whether you were actually thinking clearly when you worded it.
 

Offline filssavi

  • Frequent Contributor
  • **
  • Posts: 433
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #63 on: October 14, 2020, 03:52:05 pm »
So they spent 30B to have an in house NIC? Like this?
https://www.broadcom.com/products/ethernet-connectivity/network-adapters/100gb-nic-ocp/p2100g

" combining a high-bandwidth Ethernet controller with a unique set of highly optimized hardware acceleration engines to enhance network performance and improve server efficiency."

I did not really expand too much on the subject; however, they will spend $30B to have this:

https://www.servethehome.com/what-is-a-dpu-a-data-processing-unit-quick-primer/

The core of these devices is still a high-speed NIC, but they can do a bit more: they can communicate with storage clusters on the network and emulate an NVMe interface, so that the guest OSes on the main processor can treat remote storage as local, and they can offer high-speed offloading of network processing (things like packet filtering, sorting, etc.) on multiple 100/200 Gbit interfaces without occupying half of the main CPU's time.

As you can see in the article, Nvidia is already moving in this space, so AMD needs to move fast if they want a piece of this undoubtedly huge pie.
 

Offline FenTiger

  • Regular Contributor
  • *
  • Posts: 88
  • Country: gb
 

Offline Karel

  • Super Contributor
  • ***
  • Posts: 2257
  • Country: 00
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #65 on: October 27, 2020, 12:33:44 pm »
Indeed:

"AMD and Xilinx today announced they have entered into a definitive agreement for AMD to acquire Xilinx in an all-stock transaction valued at $35 billion."

https://ir.amd.com/news-events/press-releases/detail/977/amd-to-acquire-xilinx-creating-the-industrys-high

 

Offline SilverSolder

  • Super Contributor
  • ***
  • Posts: 6126
  • Country: 00
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #66 on: October 27, 2020, 12:42:07 pm »

They are all gearing up to compete with something very big...
 

Offline ali_asadzadeh

  • Super Contributor
  • ***
  • Posts: 1929
  • Country: ca
Re: AMD Reportedly In Advanced Talks To Buy Xilinx for Roughly $30 Billion
« Reply #67 on: November 01, 2020, 07:10:13 am »
Sad But true :'(
ASiDesigner, Stands for Application specific intelligent devices
I'm a Digital Expert from 8-bits to 64-bits
 

