Intel Whittles Down AI Portfolio, Folds Nervana in Favor of Habana
by Ryan Smith on February 3, 2020 10:00 PM EST
Over the past several years, Intel has built up a significant portfolio of AI acceleration technologies. This includes everything from in-house developments like Intel’s DL Boost instructions and upcoming GPUs, to third-party acquisitions like Nervana, Movidius, and most recently, Habana Labs. With so many different efforts going on, it could be argued that Intel’s approach was a little too fractured, and it would seem that the company has come to the same conclusion. In a move quietly revealed on Friday, Intel will be wrapping up its efforts with Nervana’s accelerator technology in order to focus on Habana Labs’ tech.
Originally acquired by Intel in 2016, Nervana was in the process of developing a pair of accelerators for Intel: the “Spring Hill” NNP-I inference accelerator and the “Spring Crest” NNP-T training accelerator. Aimed at the inference market, the NNP-I was Intel’s first in-house dedicated inference accelerator, using a mix of Intel Sunny Cove CPU cores and Nervana compute engines. Meanwhile, the NNP-T would have been the bigger beast: a chip with 24 tensor processors and over 27 billion transistors.
But, as first reported by Karl Freund of Moor Insights & Strategy, the Spring chips won’t be arriving after all. As of last Friday, Intel has decided to wrap up its development of Nervana’s processors. Development of the NNP-T has been canceled entirely. Meanwhile, as the NNP-I is a bit further along and already has customer commitments, that chip will be delivered and supported by Intel for its already-committed customers.
In place of their Nervana efforts, Intel will be expanding their efforts on a more recent acquisition: Habana Labs. Picked up by Intel just two months ago, Habana is an independent business unit that has already been working on their own AI processors, Goya and Gaudi. Like Nervana’s designs, these are intended to be high performance processors for inference and training. And with hardware already up and running, Habana has already turned in some interesting results on the first release of the MLPerf inference benchmark.
In a statement issued to CRN, Intel told the site that "Habana product line offers the strong, strategic advantage of a unified, highly-programmable architecture for both inference and training," and that "By moving to a single hardware architecture and software stack for data center AI acceleration, our engineering teams can join forces and focus on delivering more innovation, faster to our customers."
Large companies running multiple, competing projects to determine a winner is not unheard of, especially for early-generation products. But in Intel’s case this is complicated by the fact that they’ve owned Nervana for far longer than they’ve owned Habana. It’s telling, perhaps, that Nervana’s NNP-T accelerator, which had yet to be delivered, was increasingly looking last-generation with respect to manufacturing: the chip was to be built on TSMC’s 16nm+ process and used 2.4Gbps HBM2 memory at a time when competitors were getting ready to tap TSMC’s 7nm process as well as newer 3.2Gbps HBM2 memory.
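To put those memory figures in perspective, the per-pin data rates can be turned into rough aggregate bandwidth numbers. The sketch below is a back-of-the-envelope estimate: the 2.4 and 3.2 Gbps rates come from the article, the 1024-bit-per-stack width is standard for HBM2, but the four-stack count is an assumption for illustration, not a figure from the article.

```python
# Back-of-the-envelope HBM2 bandwidth comparison.
# Assumption: 4 stacks, each with the standard 1024-bit HBM2 interface.

BITS_PER_STACK = 1024  # HBM2 interface width per stack (JEDEC standard)
STACKS = 4             # assumed stack count for illustration

def hbm2_bandwidth_gbps(pin_rate_gbps: float, stacks: int = STACKS) -> float:
    """Aggregate memory bandwidth in GB/s for a given per-pin data rate."""
    return pin_rate_gbps * BITS_PER_STACK * stacks / 8  # bits -> bytes

nnp_t = hbm2_bandwidth_gbps(2.4)   # NNP-T's planned 2.4Gbps memory
newer = hbm2_bandwidth_gbps(3.2)   # the newer 3.2Gbps memory competitors eyed

print(f"2.4 Gbps HBM2: {nnp_t:.0f} GB/s")   # ~1229 GB/s
print(f"3.2 Gbps HBM2: {newer:.0f} GB/s")   # ~1638 GB/s
```

Under those assumptions, the jump from 2.4 to 3.2 Gbps is worth roughly a third more memory bandwidth, which helps explain why the older spec was starting to look dated.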
According to CRN, analysts have been questioning the fate of Nervana for a while now, especially as the Habana acquisition created a lot of overlap. Ultimately, no matter the order in which things have occurred, Intel has made it clear that it’s going to be Habana and GPU technologies backing their high-end accelerators going forward, rather than Nervana’s tech.
As for what this means for Intel’s other AI projects, that remains to be seen. But as the only other dedicated AI silicon comes out of Intel’s edge-focused Movidius group, it goes without saying that Movidius is focused on a much different market than Habana or the GPU makers of the world that Intel is looking to compete with at the high end. So even with multiple AI groups still in-house, Intel isn’t necessarily on a path to further consolidation.
Source: Karl Freund (Forbes)
Korguz - Tuesday, February 4, 2020
ever thought that nervana sucked.. intel realized it... so instead of trying to fix it.. they instead just buy someone else.. and continue on with that. come on hstewart... not everything intel does is gold.. even your beloved intel screws up with new tech... the bad part of this.. is the industry lost 2 independent companies in this area...
HStewart - Wednesday, February 5, 2020
Lets say it was a different company and say AMD purchase it and AMD decide to switch companies as this - I almost quarantine that you korquz made a smart decision for the industry. Even though same thing happen - just if it is Intel - it is bad for industry if others it is not.
Korguz - Wednesday, February 5, 2020
"i almost quarantine" ???? what does that have to do with this ??
yes.. but the same thing can be said about you.. if amd did this.. you would be all over this saying how much amd screwed up... but you dont appear to be doing that here with intel.. seems to me.. you think its ok that intel did this... and the industry would still have lost 2 independent companies..
m53 - Tuesday, February 4, 2020
Intel was the largest investor in Habana. But now it owns a 100% share. Here is an article from 2018:
mode_13h - Tuesday, February 4, 2020
At least, with the cancellation of Knights Mill, Intel seems to have gotten over its not-invented-here syndrome.
That connectivity is pretty crazy, though. Did they license someone else's Ethernet IP? That would be a good argument in favor of it.
zmatt - Tuesday, February 4, 2020
Unless my math is wrong, that 10x 100GbE exceeds the total bandwidth of the 16x PCIe 4.0 bus by a fair margin. How are you supposed to actually use all of that bandwidth?
e1jones - Tuesday, February 4, 2020
Most likely because it all stays on the card and that bandwidth is to scale to other cards/nodes.
Yojimbo - Tuesday, February 4, 2020
7 of those connections are used for internal node communication. And the data doesn't go over the PCI Express bus. They use RoCE (RDMA over Converged Ethernet) to move the data around.
mode_13h - Wednesday, February 5, 2020
As others have said, it's for inter-node connections. Probably in a fashion comparable to NVLink.
Nvidia's Tesla V100 has 6 links @ 25 GB/sec per link, per direction. So, aggregate of 150 GB/sec per chip, per direction.
By comparison, 10x 100 Gb/sec = 125 GB/sec per direction. So, almost on par with Nvidia's datacenter/HPC GPU from 2.5 years ago. Just to put it in perspective.
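The arithmetic in the exchange above can be laid out explicitly. This is a sketch of the commenters' own numbers: the 10x 100GbE and six-link NVLink figures come from the thread, while the ~31.5 GB/s per-direction figure for PCIe 4.0 x16 is a commonly cited approximation added here for comparison.

```python
# Bandwidth comparison from the comment thread: Gaudi's Ethernet fabric
# vs. NVLink on the Tesla V100 vs. a PCIe 4.0 x16 host link.
# All figures are per direction.

def ethernet_gbytes(links: int, gbits_per_link: float) -> float:
    """Aggregate Ethernet bandwidth in GB/s per direction."""
    return links * gbits_per_link / 8  # bits -> bytes

gaudi_ethernet = ethernet_gbytes(10, 100)  # 10x 100GbE -> 125.0 GB/s
nvlink_v100 = 6 * 25                       # 6 links @ 25 GB/s -> 150 GB/s
pcie4_x16 = 31.5                           # approximate, per direction

# The card's Ethernet fabric far exceeds what the host PCIe link could
# carry -- consistent with the explanation that inter-card traffic stays
# on the fabric (via RoCE) rather than crossing the PCIe bus.
print(gaudi_ethernet, nvlink_v100, pcie4_x16)
```

The gap between 125 GB/s of Ethernet and ~31.5 GB/s of PCIe is exactly why the "it stays on the card" answer resolves zmatt's question.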
peevee - Tuesday, February 4, 2020
Why did they buy Habana when they already had Nervana?
Somebody at Intel got a nice kickback.
Intel is a headless chicken.