Author Topic: Intel unveils a new Xeon chip with integrated FPGA  (Read 12593 times)


Offline miguelvpTopic starter

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Intel unveils a new Xeon chip with integrated FPGA
« on: June 20, 2014, 05:37:07 am »
http://www.guru3d.com/news_story/intel_to_combine_xeons_with_programmable_fpga.html

http://www.extremetech.com/extreme/184828-intel-unveils-new-xeon-chip-with-integrated-fpga-touts-20x-performance-boost

http://www.theregister.co.uk/2014/06/18/intel_fpga_custom_chip/

No disclosure on who the FPGA manufacturer is.
Aimed at data centers but I can see other uses apart from cloud computing.

Edit: Since Intel is already fabbing Altera's 14nm Stratix 10 FPGAs on its tri-gate process, I think I can guess who the vendor is :)

Boring video for most:

[embedded video]

but @3:50, when the Altera guy mentions the SoC-based products, the Intel guy wakes up a little, and the Intel guy mentions future products many times.

Stratix 10 impressive performance info:
http://www.altera.com/devices/fpga/stratix-fpgas/stratix10/stx10-index.jsp
« Last Edit: June 20, 2014, 06:24:34 am by miguelvp »
 

Offline jeremy

  • Super Contributor
  • ***
  • Posts: 1079
  • Country: au
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #1 on: June 20, 2014, 11:27:05 am »
Interesting. But I wonder: some companies will want lots of FPGA and not much CPU, and others will want lots of CPU and only a little FPGA. When you are buying one or two processors it doesn't matter if you buy a little too much CPU power, but a whole datacentre full? That cost will add up fast.

Seems like a bit of a waste to put them together; why not just make cheap PCIe cards? I can't imagine PCIe latency being a big deal for these huge data-mining projects; they just need raw bandwidth.

The Zynq and co are interesting to me as hard CPUs connected to completely custom hardware, not as a supercomputer.

Edit: Microsoft agrees? http://www.wired.com/2014/06/microsoft-fpga/
 

Offline miguelvpTopic starter

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #2 on: June 20, 2014, 01:21:17 pm »
The FPGA is linked to the CPU in a manner that gives it fast access to memory, so Intel's approach should give more performance than what MS has done.

Think about implementing network protocols in hardware, or Erlang or any other functional language compiling down to FPGA fabric that can decode the network stream faster and communicate between modules faster.

Also, one of those articles mentions that Intel has already done custom chips for Google and Amazon, so maybe it's not unproven tech. And I really doubt Intel is doing this without proper research.
« Last Edit: June 20, 2014, 01:23:50 pm by miguelvp »
 

Offline dannyf

  • Super Contributor
  • ***
  • Posts: 8221
  • Country: 00
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #3 on: June 20, 2014, 02:03:07 pm »
Quote
Seems like a bit of a waste to put them together,

Agreed.

Quote
why not just make cheap PCIE cards?

Maybe they are looking for extreme data bandwidth with the onboard FPGA stuff. Otherwise, PCIe is pretty fast; graphics cards use it, for example.
================================
https://dannyelectronics.wordpress.com/
 

Offline SirNick

  • Frequent Contributor
  • **
  • Posts: 589
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #4 on: June 20, 2014, 07:10:27 pm »
Think about implementing network protocols in hardware, or Erlang or any other functional language compiling down to FPGA fabric that can decode the network stream faster and communicate between modules faster.

I realize this is just an example, but I'm curious what benefit (if any) there might be. Playing devil's advocate here for the sake of conversation...

Using the network protocol stack as an example, there are really only a few things that can effectively be accelerated.  Namely: address recognition, CRC calculation, fragment handling, and retransmit queues.  A lot of that is already accelerated on high-performance network cards, and has been since ... like ... the 3Com 3C905 era.
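As an illustrative aside (my own toy Python, nothing from the articles), here is a bit-serial CRC-32. The inner loop is one XOR-and-shift per bit, which is a shift register and a few gates in hardware but several instructions per bit on a CPU; that shape is exactly why NICs offload it:

Code:
import zlib

# Toy bit-serial CRC-32 (reflected polynomial 0xEDB88320), matching zlib.crc32.
# In hardware the inner loop is a shift register plus a few XOR gates, one bit
# per clock; in software it burns several instructions per bit.
def crc32_bitwise(data: bytes, crc: int = 0) -> int:
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):                              # one "clock" per bit
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

assert crc32_bitwise(b"123456789") == zlib.crc32(b"123456789")  # 0xCBF43926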

Everything else has to involve the CPU because it's application-specific.  E.g., you can't offload NFS / CIFS file copies to external hardware because the data stream has to come from a file, in a filesystem, at a logical block address from a disk that requires a controller (and its associated drivers) to communicate.  I.e., far from a simple DMA operation.

Bearing in mind this is just an off-the-cuff theoretical, I wonder how many applications don't have this same kind of CPU-bound bottleneck somewhere.  I'm not an FPGA expert by a long shot, but I've always gotten the impression they aren't good for processing, just simple logic and I/O at high speed.  So what kind of offloadable computation has that kind of signature?

Thoughts?
 

Offline MacAttak

  • Supporter
  • ****
  • Posts: 683
  • Country: us
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #5 on: June 20, 2014, 09:19:55 pm »
Cryptography is one possible application that comes to mind. General-purpose CPUs don't seem to be very efficient at it, while FPGAs can do much better. That's the whole reason cryptocurrency mining went to FPGAs (and later to ASICs).
 

Offline jeremy

  • Super Contributor
  • ***
  • Posts: 1079
  • Country: au
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #6 on: June 20, 2014, 11:54:09 pm »
Quote from: SirNick
...

Bearing in mind this is just an off-the-cuff theoretical, I wonder how many applications don't have this same kind of CPU-bound bottleneck somewhere.  I'm not an FPGA expert by a long shot, but I've always gotten the impression they aren't good for processing, just simple logic and I/O at high speed.  So what kind of offloadable computation has that kind of signature?

Thoughts?

The classic example I think of is digital filtering. As long as you can instantiate more multiply-accumulate blocks, the computational complexity of an FIR or IIR filter is O(1) per sample! However, you do get O(n) latency. Most applications I know of trade latency for bandwidth.
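To make that concrete, a toy cycle-level model in Python (my sketch only; one loop iteration stands in for one clock tick, and everything inside it would be parallel hardware):

Code:
# Cycle-level toy model of a fully parallel direct-form FIR.
# Each iteration = one clock: the shift register advances and all n
# multipliers plus the adder tree fire at once, so we get one output
# per clock (O(1) throughput) no matter how many taps. In real fabric
# the adder tree is pipelined, which is where the extra latency comes from.
def parallel_fir(samples, taps):
    shift = [0.0] * len(taps)        # input shift register, one register per tap
    out = []
    for x in samples:                # one clock tick
        shift = [x] + shift[:-1]     # all registers update in parallel
        out.append(sum(t * s for t, s in zip(taps, shift)))  # n MACs in parallel
    return out

print(parallel_fir([1, 0, 0, 0], [0.5, 0.3, 0.2]))  # impulse response: [0.5, 0.3, 0.2, 0.0]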

Cryptography is one possible application that comes to mind. General-purpose CPUs don't seem to be very efficient at it, while FPGAs can do much better. That's the whole reason cryptocurrency mining went to FPGAs (and later to ASICs).

The reason FPGAs are good at encryption is that the encryption was specifically designed to be fast in hardware, not that FPGAs are inherently better. For a counterexample, see scrypt, which was designed specifically to be slow in raw gates to prevent brute-forcing.
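Very roughly, scrypt's trick is to fill a large table and then read it back in a data-dependent order, so a brute-forcer needs lots of fast RAM and not just gates. A toy Python sketch of the idea (my illustration only, NOT real scrypt):

Code:
import hashlib

# Toy memory-hard function in the spirit of scrypt's ROMix (NOT real scrypt):
# fill a big table sequentially, then hash it back in a data-dependent order.
# Gates alone don't speed this up; you need the memory too.
def toy_memory_hard(password: bytes, n: int = 2**15) -> bytes:
    x = hashlib.sha256(password).digest()
    table = []
    for _ in range(n):                             # sequential fill
        x = hashlib.sha256(x).digest()
        table.append(x)
    for _ in range(n):                             # data-dependent reads
        j = int.from_bytes(x[:4], "little") % n    # index depends on current state
        x = hashlib.sha256(bytes(a ^ b for a, b in zip(x, table[j]))).digest()
    return x

print(toy_memory_hard(b"hunter2").hex())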

I still think I would prefer separate devices instead of a CPU/FPGA hybrid in this circumstance. Although software-defined networking is interesting, I think ASIC-based networking will always win on stupendous latency/bandwidth; we just need to develop the appropriate standards for what Google/Microsoft/etc. want. You get faster speeds (in terms of bits/s) by making more assumptions, not fewer, which makes these algorithms a prime example for ASIC design.

And I don't believe an FPGA would be more useful than a 100-core CPU in most parallel computing applications.

Don't get me wrong, FPGAs are awesome, but filling a data centre with them so that you can constantly reprogram your networking protocols just seems like either you didn't plan your protocols very well, or you are well aware that you will be replacing them all every 2-3 years at high cost when cheap ASICs overtake you. In a data centre you have a significant economies-of-scale advantage; use it!  ^-^
 

Offline Stonent

  • Super Contributor
  • ***
  • Posts: 3824
  • Country: us
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #7 on: June 21, 2014, 01:29:34 am »
So how is this FPGA going to like being cooked by its neighbor?  To me, it doesn't seem like a very efficient way to deal with the heat.
The larger the government, the smaller the citizen.
 

Offline Zad

  • Super Contributor
  • ***
  • Posts: 1013
  • Country: gb
    • Digital Wizardry, Analogue Alchemy, Software Sorcery
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #8 on: June 21, 2014, 02:47:14 am »
If documents relating to it become public, then it is probably not cryptography but scientific number crunching. You can bet your bottom Rouble that if it's crypto, they will want to keep the details as hush-hush as possible.


Offline miguelvpTopic starter

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #9 on: June 21, 2014, 07:30:39 pm »
If documents relating to it become public, then it is probably not cryptography but scientific number crunching. You can bet your bottom Rouble that if it's crypto, they will want to keep the details as hush-hush as possible.

Cryptography algorithms are public domain, some requiring multiple passes over the data. The FPGA could decrypt the data on the fly so that the CPU doesn't need to do it.
 

Offline Mechatrommer

  • Super Contributor
  • ***
  • Posts: 11634
  • Country: my
  • reassessing directives...
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #10 on: June 21, 2014, 09:59:55 pm »
yeah, crypto is pretty mature and quite amazing nowadays. you can give the code to anyone you like and they won't be able to follow it, as that crypto guy said. 3d rendering has been taken over by the GPU. what else? i vote for bitcoin mining.
Nature: Evolution and the Illusion of Randomness (Stephen L. Talbott): Its now indisputable that... organisms “expertise” contextualizes its genome, and its nonsense to say that these powers are under the control of the genome being contextualized - Barbara McClintock
 

Offline Rasz

  • Super Contributor
  • ***
  • Posts: 2616
  • Country: 00
    • My random blog.
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #11 on: June 23, 2014, 01:25:30 pm »
Think HFT. Those guys love low latency so much they don't even use the Linux network stack or drivers; instead they talk directly to the NIC from userspace. They will love FPGA inside CPU long time. An FPGA will make it possible to process more packets per second, thus stealing more money^^^providing more market liquidity!
Who logs in to gdm? Not I, said the duck.
My fireplace is on fire, but in all the wrong places.
 

Offline EEVblog

  • Administrator
  • *****
  • Posts: 37740
  • Country: au
    • EEVblog
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #12 on: June 23, 2014, 01:32:59 pm »
Don't get me wrong, FPGAs are awesome, but filling a data centre with them so that you can constantly reprogram your networking protocols just seems like either you didn't plan your protocols very well, or you are well aware that you will be replacing them all every 2-3 years at high cost when cheap ASICs overtake you. In a data centre you have a significant economies-of-scale advantage; use it!  ^-^

I thought data centres were obsessed with power consumption / MIPS? How is using inefficient FPGAs going to help?
 

Offline Jeroen3

  • Super Contributor
  • ***
  • Posts: 4078
  • Country: nl
  • Embedded Engineer
    • jeroen3.nl
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #13 on: June 23, 2014, 01:46:37 pm »
Computer vision might benefit from this. Basic image operations (add, multiply, shift) are unnecessarily slow on CPUs. What if you could do those things to a full-HD image in a few clocks instead of a few million?
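A quick NumPy sketch of the kind of per-pixel work meant here (my illustration only): each operation below is roughly two million sequential-ish operations per frame on a CPU, but would be a single one-clock pipeline stage on an FPGA pixel stream.

Code:
import numpy as np

# One full-HD 8-bit frame: 1920 x 1080 = ~2.07 million pixels.
# uint16 gives headroom so the arithmetic below can't overflow.
frame = np.random.randint(0, 256, size=(1080, 1920), dtype=np.uint16)

# Per-pixel add, multiply and shift: ~2M operations each on a CPU,
# but one pipelined stage each in fabric, keeping up with the pixel clock.
brightened = np.clip(frame + 16, 0, 255)   # add with saturation
scaled     = (frame * 3) >> 1              # multiply then shift (x1.5)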
 

Offline Rasz

  • Super Contributor
  • ***
  • Posts: 2616
  • Country: 00
    • My random blog.
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #14 on: June 23, 2014, 01:51:20 pm »
GPUs take care of heavy computation in data centers now. FPGAs are only used when you need computation and low latency.

2-3 years? :) Try 1-2 years. HFT prints money; those people don't eff around. Hardware is almost free to them; quants earn millions.
Who logs in to gdm? Not I, said the duck.
My fireplace is on fire, but in all the wrong places.
 

Online tszaboo

  • Super Contributor
  • ***
  • Posts: 7383
  • Country: nl
  • Current job: ATEX product design
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #15 on: June 24, 2014, 07:42:28 am »
I think it is the right approach. Today's CPUs are way too complex; most of the transistors are rarely used. Intel was solving the frequency issue by increasing complexity and building in stuff for new instructions. But we are already at SLIW (stupidly long instruction word). At least this way some fine programmer can decide what to accelerate, and the FPGA part can always be reconfigured on the go by copying some megabytes of data.
 

Offline mikeselectricstuff

  • Super Contributor
  • ***
  • Posts: 13747
  • Country: gb
    • Mike's Electric Stuff
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #16 on: June 24, 2014, 08:34:31 am »
Don't get me wrong, FPGAs are awesome, but filling a data centre with them so that you can constantly reprogram your networking protocols just seems like either you didn't plan your protocols very well, or you are well aware that you will be replacing them all every 2-3 years at high cost when cheap ASICs overtake you. In a data centre you have a significant economies-of-scale advantage; use it!  ^-^


I thought data centres were obsessed with power consumption / MIPS? How is using inefficient FPGAs going to help?
Why would they be inefficient? OK, maybe compared to an ASIC, but compared to a general-purpose processor an algorithm running on an FPGA is likely to be a lot more efficient: only the logic needed to run that process is being clocked, a lot can be done in parallel, and clocks can be controlled at a very fine level to improve efficiency. It can also implement application-specific pipelining, so instead of intermediate results being written to and read from memory or cache, they can be stored locally to the logic that uses them.
I saw a figure in one article on this of a 50% reduction in overall power for a datacentre running some custom algorithms on PCI FPGA boards.
Youtube channel:Taking wierd stuff apart. Very apart.
Mike's Electric Stuff: High voltage, vintage electronics etc.
Day Job: Mostly LEDs
 

Offline SirNick

  • Frequent Contributor
  • **
  • Posts: 589
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #17 on: June 25, 2014, 12:33:09 am »
I think it is the right approach. Today's CPUs are way too complex; most of the transistors are rarely used. Intel was solving the frequency issue by increasing complexity and building in stuff for new instructions. But we are already at SLIW (stupidly long instruction word). At least this way some fine programmer can decide what to accelerate, and the FPGA part can always be reconfigured on the go by copying some megabytes of data.

Except, wouldn't the FPGA basically be an exclusive-access device?  You can't just give applications direct programming access without them clobbering each other, so it'll have to be owned by the OS.  If it becomes common to have an FPGA on-chip, OSes will start using them to offload some generic task(s), turning them into low-efficiency but programmable ASICs.  (So why not just concentrate on better peripheral hardware, like RNGs, watchdogs, crypto accelerators, packet processors, etc.?)

To reserve them as application-specific is to leave them idle nearly all of the time, since relatively few people would know what to do with them, if they knew they were there at all.  They would take up die space that could be better used for more cores, more sophisticated pipelining, more cache, a GPU, etc.  Those who really stand to benefit could just use PCIe cards or whatever.  Not quite as fast as being on-die, sure, but you could at least right-size them for the task.

Makes no sense to me at all.
 

Offline miguelvpTopic starter

  • Super Contributor
  • ***
  • Posts: 5550
  • Country: us
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #18 on: June 25, 2014, 12:49:30 am »
Altera has OpenCL support for some of their FPGAs, so you just program them like you would a GPU, but with more performance and lower power requirements. There is also a C-to-FPGA flow, but a permanent license from Impulse is around $10K; more options will come once the space opens up because of the hardware being available.
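For flavour, here's a minimal vector-add in that GPU-style OpenCL flow, via PyOpenCL (a sketch under my own assumptions; note that Altera's OpenCL SDK compiles kernels for the FPGA offline rather than building them at runtime like this):

Code:
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel is plain OpenCL C; in an FPGA flow this is what gets
# turned into pipelined fabric instead of GPU threads.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

prg.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)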
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: au
    • send complaints here
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #19 on: June 25, 2014, 01:45:12 am »
Don't get me wrong, FPGAs are awesome, but filling a data centre with them so that you can constantly reprogram your networking protocols just seems like either you didn't plan your protocols very well, or you are well aware that you will be replacing them all every 2-3 years at high cost when cheap ASICs overtake you. In a data centre you have a significant economies-of-scale advantage; use it!  ^-^


I thought data centres were obsessed with power consumption / MIPS? How is using inefficient FPGAs going to help?
Why would they be inefficient? OK, maybe compared to an ASIC, but compared to a general-purpose processor an algorithm running on an FPGA is likely to be a lot more efficient: only the logic needed to run that process is being clocked, a lot can be done in parallel, and clocks can be controlled at a very fine level to improve efficiency. It can also implement application-specific pipelining, so instead of intermediate results being written to and read from memory or cache, they can be stored locally to the logic that uses them.
I saw a figure in one article on this of a 50% reduction in overall power for a datacentre running some custom algorithms on PCI FPGA boards.
Or we could read the Microsoft paper in post #2:
Edit: Microsoft agrees? http://www.wired.com/2014/06/microsoft-fpga/

It is already true that relatively few people know what to do with any hardware inside their computer, other than make use of it, of course. It isn't an actual disadvantage to add yet another thing inside. As long as the vendor has a customer with a problem for which the FPGA provides a marketable solution, the concept has a defined reason to exist. That's all it needs. You have to assume Intel has some feedback from its customers to have gone to the expense of developing this idea. This concept doesn't prevent ongoing development of better peripheral hardware.

As to the FPGA taking up die space that could be "better" used for more cores and so on: well, that's just your view. It may not hold as true for someone else. If this idea fits well with the needs of enough people then it will survive and thrive. If not, it will fade away. There is no need to debate the merits or otherwise; you just need to wait and see.

Server chips come in a much wider range of core counts, clock speeds, and memory support options, so adding various sizes of FPGA co-processor as separate SKUs seems plausible.
 

Offline SirNick

  • Frequent Contributor
  • **
  • Posts: 589
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #20 on: June 25, 2014, 01:50:52 am »
Don't get me wrong, I'm not trying to impose a viewpoint on the world here.  I just don't get it.  I haven't yet seen an example use in this discussion that gave me that "ah hah" moment, so it seems like a pointless waste of silicon.  To me.

It's kinda like the Donkey Konga bongos that came out for one of the older Nintendo consoles.  I mean, c'mon, who's going to use those for any other game?  There will be few applications that are just really well suited, so it won't be a must-buy accessory, and likewise game developers can't assume the customer base already has them, so it's a one-trick pony.  On-CPU FPGAs seem like the same deal.  I'm just trying to find the (programmable, har har) logic behind that decision.
 

Offline Someone

  • Super Contributor
  • ***
  • Posts: 4531
  • Country: au
    • send complaints here
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #21 on: June 25, 2014, 01:56:02 am »
To reserve them as application-specific is to leave them idle nearly all of the time, since relatively few people would know what to do with them, if they knew they were there at all.  They would take up die space that could be better used for more cores, more sophisticated pipelining, more cache, a GPU, etc.  Those who really stand to benefit could just use PCIe cards or whatever.  Not quite as fast as being on-die, sure, but you could at least right-size them for the task.

Makes no sense to me at all.
GPU offloading hasn't seen much uptake in the server space, and where each server can end up very specialised in its workload, a generic FPGA that comes cheaply could end up doing some interesting accelerations. For consumers all the problems you identified are very valid, so these embedded FPGAs (as they exist today) may never reach beyond server and workstation platforms.

But imagine a future where the FPGA evolves into an arbitrary co-processor with fast reconfiguration, and there are many of these generic co-processors available, through time-sharing (existing fringe FPGA technology) or just mass duplication of 16, 32, ... 1024? cores. You could allocate them like threads. But that would need a stable architecture to push development, and while Intel has tried to hold x86 to itself, could they do it again and build the industry-standard FPGA co-processor?
 

Offline SirNick

  • Frequent Contributor
  • **
  • Posts: 589
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #22 on: June 25, 2014, 02:12:34 am »
Hmmm, I guess that could have interesting applications.  Assuming the FPGA architecture remained consistent enough that pre-synthesized "libraries" enjoyed the same kind of cross-family compatibility that compiled binaries do.  Thanks for the insight.
 

Offline jeremy

  • Super Contributor
  • ***
  • Posts: 1079
  • Country: au
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #23 on: June 25, 2014, 02:30:31 am »
Really, I think the problem is one of cost. Firstly, the previously mentioned Stratix 10 (>$20,000 each, I think I have seen) can cost as much as 160 Xeon cores ($2000 per 16-core chip). Unless my application was just sending 400Gb/s of serialised data between two ICs, I would definitely rather have the Xeons as a parallel computing platform. The FPGAs might be better in terms of bandwidth/watt, but I'm not sure the benefit would be enough to recoup your losses on the initial investment before you inevitably have to upgrade in a few years.
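Spelling out that arithmetic (prices are the rough 1pc figures above, nothing more):

Code:
# Rough cost comparison from the figures quoted above:
# $2000 buys a 16-core Xeon, so a >$20,000 Stratix 10 costs
# about as much as 160 Xeon cores.
xeon_cores_per_dollar = 16 / 2000
print(20_000 * xeon_cores_per_dollar)  # -> 160.0 cores for one big FPGA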

From a cost standpoint, I can't see why you would want to stick two expensive things inseparably together. Keep them apart and you can upgrade one without the other, or scale in either direction without much loss. Having them together sounds like the days of printers with one black and one CMY tank, rather than separate ones.
 

Offline Scrts

  • Frequent Contributor
  • **
  • Posts: 797
  • Country: lt
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #24 on: June 25, 2014, 11:15:22 am »
Really, I think the problem is one of cost. Firstly, the previously mentioned Stratix 10 (>$20,000 each, I think I have seen) can cost as much as 160 Xeon cores ($2000 per 16-core chip).

The price you see there is from Digikey, Farnell, whatever it is. If you buy thousands of them, the price gets ridiculously low. E.g. we use an FPGA which costs around 50 USD at Digikey, but we get it for around 6 USD. If we increased the volumes it would be even less, hitting under 10% of the Digikey price.

I am pretty sure they are not paying more than 10 USD for that piece of logic on the silicon. This is why: the silicon is expensive because it's on a process that's difficult to manufacture correctly. Altera doesn't carry any responsibility for that, and thus isn't investing anything in the silicon itself; Intel alone is responsible for the process and their product. I'd say the FPGA fabric on their CPUs is dirt cheap.
 

Offline jeremy

  • Super Contributor
  • ***
  • Posts: 1079
  • Country: au
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #25 on: June 25, 2014, 11:40:15 am »
Really, I think the problem is one of cost. Firstly, the previously mentioned Stratix 10 (>$20,000 each, I think I have seen) can cost as much as 160 Xeon cores ($2000 per 16-core chip).

The price you see there is from Digikey, Farnell, whatever it is. If you buy thousands of them, the price gets ridiculously low. E.g. we use an FPGA which costs around 50 USD at Digikey, but we get it for around 6 USD. If we increased the volumes it would be even less, hitting under 10% of the Digikey price.

I appreciate that this is the case, which is why I used the 1pc price for the Xeon as well. Since they are both made at the same fab, I imagine the prices would scale similarly.

I am pretty sure they are not paying more than 10 USD for that piece of logic on the silicon. This is why: the silicon is expensive because it's on a process that's difficult to manufacture correctly. Altera doesn't carry any responsibility for that, and thus isn't investing anything in the silicon itself; Intel alone is responsible for the process and their product. I'd say the FPGA fabric on their CPUs is dirt cheap.

OK, fair enough, but I think we will have to agree to disagree on this one  ^-^  I have been under the impression that FPGAs are expensive because the yield is bad (in terms of $ loss per failure), the die is huge/complex, and the market is low volume. While this may solve the last problem, now if you lose the FPGA section of the die, you also lose a potentially perfectly working Xeon too. Or I guess they could just switch off the FPGA and bin it as a plain old CPU. I don't know if the piece of silicon is only worth $10; I guess I do not have any insight into that (never had a chip fabbed before). Perhaps some other forum members could help out on this one?

But still, this is all beside the point. I want a datacentre full of FPGAs; I just don't want it to be full of Xeons too!
 

Offline RR

  • Contributor
  • Posts: 30
  • Country: cz
Re: Intel unveils a new Xeon chip with integrated FPGA
« Reply #26 on: June 25, 2014, 03:26:38 pm »
Quote
if you lose the FPGA section of the die, you also lose a potentially perfectly working Xeon too

My guess is that they use separate dies for each so they can bin them separately.

Intel has done this before:

[embedded image]

And the first dual-core and quad-core processors were separate dies on one substrate too.
 

