Intel at ISSCC 2015: Reaping the Benefits of 14nm and Going Beyond 10nm
by Ian Cutress on February 22, 2015 3:00 PM EST

As part of the International Solid-State Circuits Conference every year, Intel brings forward a number of presentations regarding its internal research. The theme this year at ISSCC is ‘Silicon Systems – Small Chips for Big Data’, and Intel previewed a number of its presentations with the media and analysts last week before the conference. Hot topics include developments at 14nm that could potentially be ported to real-world devices, technological developments at 22nm using Tri-Gate CMOS for adaptive, autonomous and resilient systems, and a few quick words regarding 10nm and beyond.
Taking Moore’s Law Beyond 10nm
Part of ISSCC will be a round table with representatives from Intel, Qualcomm, a couple of industry companies and university researchers, discussing how Moore's Law will be sustained at 10nm and how it can be extended down to 7nm. The graphs shown at IDF 2014 make an appearance again, showing cost per square millimeter and cost per transistor, courtesy of Mark Bohr (Intel Senior Fellow, Logic Technology Development):
The better-than-trend drop in cost per transistor at 14nm was explained as the result of some smart internal reworking: by recognizing that certain areas of a die require different masking and optimizing the masking process accordingly, cost can be reduced rather than relying on fewer general-purpose masks (though it is still a balance).
It was explained that while 10nm will have more masking steps than 14nm, the delays that bogged 14nm down and made it late to market will not be present at 10nm - or will at least be reduced. We were told that a major reason for the delay was that the increased development complexity of 14nm required more internal testing stages and masking implementations, along with the need for sufficient yields before going ahead with the launch. As a result, Intel is improving the efficiency of testing at each stage and expediting the transfer of wafers through its testing protocols in order to avoid delays; Intel tells us that its 10nm pilot lines are running 50% faster than 14nm did as a result of these adjustments. So while the additional masking steps at 10nm ultimately increase fixed costs, Intel still claims that its methods will result in a reduction in cost per transistor without needing a completely new patterning process. EUV lithography was discussed, but Intel seems to be hoping to avoid it until it is absolutely necessary, as EUV development so far has progressed more slowly than expected.
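To put rough numbers on that argument, here is a minimal back-of-envelope sketch in Python. Every figure in it (wafer cost, die size, transistor count, yield, the 30% multi-patterning surcharge) is an invented assumption for illustration, not Intel data:

```python
# Hypothetical illustration of the cost-per-transistor argument above.
# All numbers are invented for the sketch; Intel has not published these figures.

def cost_per_transistor(wafer_cost, die_area_mm2, transistors_per_die, yield_fraction):
    """Cost per transistor for one die size on a 300 mm wafer."""
    wafer_area = 3.14159 * (300 / 2) ** 2          # mm^2, ignoring edge loss
    dies_per_wafer = wafer_area / die_area_mm2
    good_dies = dies_per_wafer * yield_fraction
    return wafer_cost / (good_dies * transistors_per_die)

# 14nm baseline: assumed $8,000 wafer, 100 mm^2 die, 1.5e9 transistors, 80% yield
c14 = cost_per_transistor(8000, 100, 1.5e9, 0.80)

# 10nm: extra multi-patterning steps push the assumed wafer cost up ~30%,
# but ~2x density means the same design fits in roughly half the area.
c10 = cost_per_transistor(8000 * 1.3, 50, 1.5e9, 0.80)

print(f"14nm: {c14:.2e} $/transistor, 10nm: {c10:.2e} $/transistor")
# Even with a pricier wafer, the density gain dominates, so $/transistor falls.
```

The point the sketch makes is the same one Intel is making: the extra masking steps raise the fixed cost of each wafer, but as long as density scaling outpaces that surcharge, cost per transistor still drops.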
10nm will come with innovation, and getting down to 7nm will require new materials and processes, which Intel wants to approach through progressively tighter integration between process development and the product design teams. New materials and device structures are key elements on that list, and while III-V materials were discussed in the ISSCC preview, no exact details were given.
Along with addressing the general challenges in getting down to 7nm, Intel's research group is also looking at future integrated systems, specifically 2.5D (separate dies on an interposer) and 3D (stacked dies). While 2.5D and 3D are not direct replacements for smaller manufacturing nodes - they just allow you to lay down more transistors at a higher cost - they are being examined as potential solutions for containing power consumption in certain situations (2.5D) or for building better size-limited integrated topologies (3D). Specifically, Intel is looking at scenarios where logic blocks using different fabrication methods are laid out in their own layers and stacked, rather than implemented on a single layer of a single die (think memory, digital logic, and analog communications on a single chip).
These kinds of configurations may appear in smartphones, tablets, or other devices that use highly integrated chips where multiple types of fabrication are necessary, and where manufacturers can charge the premium price needed to cover the additional costs. We have discussed in the past how 2.5D and 3D configurations can improve performance, especially when it comes to memory density and graphics bandwidth; however, according to Intel, the added cost will keep that premium in place even at high volume.
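One classic reason splitting a design across dies can pay off is yield: small dies can be tested and binned individually before assembly, so a single defect no longer kills a large chip. A minimal sketch using the textbook Poisson yield model Y = exp(-A·D0), with made-up defect density and die areas rather than anything from Intel:

```python
import math

# Why chiplets on an interposer (2.5D) can waste less silicon than one big die.
# Poisson yield model; defect density and areas are invented for illustration.

def die_yield(area_mm2, defects_per_mm2):
    return math.exp(-area_mm2 * defects_per_mm2)

D0 = 0.002                                   # assumed defects per mm^2

monolithic_good = die_yield(400, D0)         # fraction of 400 mm^2 dies that work
chiplet_good = die_yield(100, D0)            # fraction of 100 mm^2 chiplets that work

print(f"monolithic: {monolithic_good:.1%} good")   # ~44.9%
print(f"chiplet   : {chiplet_good:.1%} good")      # ~81.9%
# Bad chiplets are discarded individually before assembly, so far less silicon
# is wasted - but the interposer and assembly add the cost premium Intel cites.
```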
Reaping the Benefits of 14nm
Intel is highlighting a trio of papers at ISSCC regarding 14nm. One of the areas ripe for exploitation at 14nm is data transfer, especially transmitters. To that end, Intel is showing a 14nm Tri-Gate CMOS serializer/deserializer transmitter capable of 16-40 Gbps, using both NRZ (non-return-to-zero) and PAM4 (pulse-amplitude modulation with four levels) modes within a 0.03 mm² die area.
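The difference between the two signalling modes is worth spelling out: NRZ sends one bit per symbol using two levels, while PAM4 packs two bits per symbol onto four amplitude levels, so a 40 Gbps PAM4 link only needs a 20 GBaud symbol rate. A toy Python illustration of the encoding idea (this is the general modulation concept, not Intel's circuit):

```python
# NRZ: one bit per symbol (two levels). PAM4: two bits per symbol (four levels).

def nrz_encode(bits):
    """One bit per symbol: 0 -> -1, 1 -> +1."""
    return [1 if b else -1 for b in bits]

def pam4_encode(bits):
    """Two bits per symbol, Gray-coded onto four amplitude levels."""
    gray_levels = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
    pairs = zip(bits[0::2], bits[1::2])
    return [gray_levels[p] for p in pairs]

data = [1, 0, 1, 1, 0, 0, 1, 0]
print("NRZ :", nrz_encode(data))    # 8 symbols for 8 bits
print("PAM4:", pam4_encode(data))   # 4 symbols for 8 bits -> half the baud rate
```

The trade-off is that PAM4's four levels are closer together, so it is more sensitive to noise, which is why a transmitter supporting both modes across 16-40 Gbps is notable.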
Also on data transfer is a paper describing the lowest-power 10 Gb/s serial link to date and the first complete serial link built on 14nm Tri-Gate CMOS. Intel has working 14nm silicon showing 59 mW power consumption within a 0.065 mm² die area, with the link adapting its configuration to the committed data rate to provide the cleanest data response.
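The headline number translates into an energy-per-bit figure, which is how serial links are usually compared. Assuming the 59 mW covers the whole link running at the full 10 Gb/s (the paper may break it down differently):

```python
# Back-of-envelope energy efficiency for the serial link figures quoted above.
power_w = 59e-3          # 59 mW
bitrate = 10e9           # 10 Gb/s

energy_per_bit = power_w / bitrate
print(f"{energy_per_bit * 1e12:.1f} pJ/bit")   # -> 5.9 pJ/bit
```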
Perhaps the most exciting 14nm development is in memory, with Intel describing an in-house 84 Mb SRAM design that uses the world's smallest bitcell (0.050 µm²). At 14nm it represents a doubling of density to 14.5 Mb per square millimeter, but it also operates at a substantially lower minimum voltage for a given frequency compared to the previous 22nm process. As shown in the graph in the slide, 0.6V is good for 1.5 GHz, but the design can scale up to 3 GHz. It is also worth noting that the 14nm yield gradient is more conducive to lower-voltage operation than the 22nm process. While it may seem odd to promote an 84 Mb (10.5 MB) design, Intel noted that it can be scaled up to 100 Mb or more, making it a better fit for embedded devices than something like Crystal Well on the desktop.
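The two quoted figures are consistent with each other, as a quick sanity check shows; the gap between raw bitcell density and array density is presumably the peripheral circuitry (sense amplifiers, decoders and so on), though Intel did not break that down:

```python
# Sanity-checking the SRAM numbers quoted above.
bitcell_um2 = 0.050
raw_density = 1e6 / bitcell_um2            # bitcells per mm^2 (1 mm^2 = 1e6 um^2)
quoted_density = 14.5e6                    # bits per mm^2, from Intel's slide

print(f"raw bitcell density: {raw_density / 1e6:.1f} Mb/mm^2")          # 20.0
print(f"implied array efficiency: {quoted_density / raw_density:.0%}")  # ~72%
```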
Still Developing on 22nm
While 14nm is great for density, lower voltage and lower power, other features on a die are often produced at a looser resolution in order to ensure compatibility. The 22nm process also offers a great research platform for testing new on-die features that can be scaled down at a later date. To this end, Intel Labs is presenting a couple of papers about in-house test chips for new features.
The first test chip concerns data retention within register files. Depending on external circumstances such as temperature and age, this adaptive and resilient domino register file testchip is designed to realign timing margins, detect errors as they occur, and adjust its behavior to compensate. The logic Intel is presenting is also designed to cater for die variation and voltage droop, making it more of a universal solution. At a higher level, it sounds like the situation where NAND flash gets old and the onboard controller has to compensate for shifting voltage-level margins.
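Intel's paper implements this in hardware, but the general adapt-on-error control loop is easy to sketch. Purely as an illustration of the idea, here is a hypothetical software version; all names, step sizes and thresholds below are ours, not from the paper:

```python
# Hypothetical sketch of an adapt-on-error timing-margin loop: back off when an
# error is detected, cautiously reclaim margin after a long clean stretch.

def adjust_margin(margin_ps, error_detected, clean_cycles,
                  step_ps=5, relax_after=10_000, min_ps=10, max_ps=100):
    if error_detected:
        # An error means the current margin is too tight: widen it immediately.
        return min(margin_ps + step_ps, max_ps), 0
    clean_cycles += 1
    if clean_cycles >= relax_after:
        # Long error-free stretch: reclaim some margin (and thus performance).
        return max(margin_ps - step_ps, min_ps), 0
    return margin_ps, clean_cycles

# Usage: called once per monitoring interval.
# margin, clean = adjust_margin(margin, error_flag, clean)
```

The asymmetry (widen immediately, narrow slowly) is the standard way such loops stay stable while still tracking slow drift from temperature and aging.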
The second test chip concerns the execution units in Intel's graphics, dealing with fast, autonomous and independent dynamic voltage scaling. Combining a low-dropout regulator (LDO) for low voltages, such as at idle, with a switched-capacitor voltage regulator (SCVR) for high voltages allows the appropriate current injection to deal with voltage droop, as well as a large reduction in energy. When applied, this should allow for either a power drop at the same frequency, or a higher frequency at the same voltage. The numbers Intel provided are all from internal silicon rather than anything in the wild, and the technique will be examined at smaller nodes in due course.
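The core of the scheme is simply picking the regulator suited to the requested voltage. A minimal sketch of that selection logic, with the crossover point invented for illustration (Intel did not disclose one):

```python
# Hypothetical sketch of the dual-regulator selection described above: an LDO
# serves low (idle) voltages, the SCVR serves high ones. The 0.65 V crossover
# is an invented example, not a figure from Intel's paper.

def select_regulator(target_v, crossover_v=0.65):
    return "LDO" if target_v < crossover_v else "SCVR"

for v in (0.45, 0.60, 0.80, 1.05):
    print(f"{v:.2f} V -> {select_regulator(v)}")
```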
Intel at ISSCC
ISSCC always throws out some interesting information about what is actually going on under the hood of the silicon we use almost every day, which we tend to think of as a black box that slowly gets better over time. In reality, new features are fully researched and documented before being included in the next model, all while trying to keep a balance between power usage and efficiency. On the CPU architecture side of the equation, we have reported that Broadwell features needed to show a 2% performance or efficiency improvement for every 1% increase in power, a steeper requirement than the 1:1 ratio previously used. For all intents and purposes this means that if the same strategy is applied at 10nm and beyond, we are in for a very interesting time. It was also interesting to hear about Intel speeding things up at 10nm to avoid the delays incurred at 14nm, as well as its thoughts on future technologies.
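That 2:1 rule can be stated directly; the feature numbers below are invented examples, not real Broadwell features:

```python
# The Broadwell design rule quoted above: a candidate feature is accepted only
# if it buys at least 2% performance (or efficiency) per 1% of added power.

def feature_accepted(perf_gain_pct, power_cost_pct, ratio=2.0):
    return perf_gain_pct >= ratio * power_cost_pct

print(feature_accepted(3.0, 1.0))   # True:  3% gain for 1% power clears the 2:1 bar
print(feature_accepted(1.5, 1.0))   # False: would have passed the old 1:1 rule
```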
The papers Intel is presenting should be available via the ISSCC website as the presentations take place, along with a few others that pique our interest. This should get us ready for some interesting developments come Intel's Developer Forum later in the year.
55 Comments
jjj - Sunday, February 22, 2015
No, Intel has chosen to rip us off, because they can. Some chipset functionality migrated too. For the GPU, we pay and we don't need it. As you mention, they make a chip without a GPU, so they have a chip for us, except it's not in the normal price range; it's a lot more, so it becomes irrelevant in the consumer space.
They started to screw us over hard with Gulftown 5 years ago and they have no intention whatsoever of changing direction, or even of not making it much worse. We will keep getting less for more as long as nobody bothers them (hell, look at die size and pricing for Core M: the actual chip on that module is 82mm² and, as per AT, the price at launch was $281 - great price for a mobile-class SoC, lol).
They need to be able to make more money while selling less, and to afford to waste many billions every year failing in mobile. So we get heavy marketing BS and crappy chips at crazy prices.
The upside is that anyone who behaves like that ends up choking on it.
stadisticado - Sunday, February 22, 2015
Or, on the other side of things, we could allow capitalism to operate without getting the government involved. Silicon for computers only accounts for ~25% of worldwide wafer shipments and continues to shrink. Intel is hardly in a monopolistic position when considering the entire logic and apps processor ecosystem. Further, they continue to charge what the market will bear - since when is that a crime? Finally, you seem to have some misapprehension that cost is directly correlated with area, which leads me to believe you don't understand litho scaling or multi-patterning and how they increase cost gen-on-gen.
Disclaimer: jjj is a known troll, but hopefully my comment will be edifying for others.
cactusdog - Monday, February 23, 2015
Nope, didn't work. The other guy made a reasonable comment about lack of competition (which has been a much-discussed issue for a few years now) and you degraded the conversation by making a political statement.
ZeroPointEF - Monday, February 23, 2015
I don't like the high prices that Intel charges, but I think the profits are necessary to fund further R&D. It seems like the cost to get smaller is increasing geometrically rather than exponentially.
jcromano - Tuesday, March 17, 2015
How do "geometrically" and "exponentially" differ? I always thought they meant the same thing.Geometric Series (with common ratio r=2): 1, 2, 4, 8, 16, . . .
Exponential Series (e^(kn) with k = ln 2, starting from n=0): 1, 2, 4, 8, 16, . . .
Guest8 - Sunday, February 22, 2015
They only lost a billion in mobile. The rest is fixed R&D.
TheJian - Monday, February 23, 2015
They lose that every quarter. It's beyond losing $4B a year now. Some, like JP Morgan, are asking them to DROP mobile (I think they should just buy NV and out-ARM ARM, so to speak). If they continue to lose $4B+ per year, investors will get nervous at some point.
http://blogs.barrons.com/techtraderdaily/2014/05/0...
Google 'jp morgan intel mobile loss' and you can read stories all over. They either need to fab more for others, or buy NV and fab their stuff, which would allow them to catch Qcom etc. Imagine a 10nm X1 (or whatever NV is on by then) coming even a bit before others and you get the point. Imagine the damage they'd do with a 10nm gpu...LOL. Without doing this I believe ARM will slowly kill Intel's ability to invest in the R&D needed to keep up with their fab enemies.
Intel needs to respond before ARM (anyone on ARM: NV, Qcom etc) puts out a 75-100W chip that runs in a usual PC box, complete with an NVLink gpu from NV. As apps/games amp up on mobile, with 64-bit now and larger memory, storage etc, we'll eventually see the box I'm talking about replace WINTEL. Make no mistake, they all want a piece of WINTEL's total revenue. That box will probably come with multiple FREE operating systems and a huge drop in cpu price (say $200 vs. Intel's $350 for top consumer cpus). We're talking a tri/quad boot of SteamOS, linux (some version), Android etc. Intel margins drop then, windows will sell less (hence the push for crap like common core, so they can data-mine 300+ stats on your kids, data the kids tell them about YOU, your religion etc etc, instead of selling the OS), and they could possibly end up with far less market than they have today. I could easily see ARM owning 20-50% of desktops in a decade or less. NV is clearly heading toward making a box without PCIE; no need for Hypertransport, x86, Infiniband etc. NVLink for the bus stuff will be all that is needed, and a great cpu by then to pair with their gpu in a normal 500W etc box. You'll have apps like the full adobe suite etc on there by then, and gaming that looks just like a PC. Porting is getting easier between ARM/Intel, and many games are released on multiple platforms at once today.
Wintel has trouble ahead if Intel can't figure out how to make a deal with Jen/Nvidia, which may be impossible (too much hate, and so far he's said only if he's CEO of Intel afterward). They can't afford to buy Qcom/Samsung/Apple, and AMD is a loser on all fronts compared to NV now. At some point soon, Intel may not even be able to afford NV, as cars, grid (1000 testing it now), and at some point mobile/PC add more to their stock price. As gaming takes over mobile, NV will look better and better there. Currently mobile is not quite pushing things enough to REQUIRE NV/AMD gpu power, driver experience, game dev experience with their hardware etc. If AMD is to survive they need to get their best mobile foot forward before the games get here that require it. Apps will follow shortly after games push the hardware requirements up. I like where we're headed though :) With AMD sucking wind, I'm merrily looking forward to ARM owning a share of the PC pie (meaning any ARM vendor in a PC-like box, hopefully NV, speaking as a gamer that is).
I see better pricing ahead as ARM moves up the PC chain from chromebooks/low-end notebook type stuff. I could like a tri-boot PC with no WINTEL inside for $200-300 off (that being the free OS plus $150 off Intel's $350 chips). We may even get a break on the gpu side if someone emerges that can't be sued out of taking over AMD's place, meaning ARM, Qcom, Apple, etc. I don't really count IMG.L here, as they make almost nothing compared to the rest yearly. They'll die or be bought as the race heats up, or perhaps be sued out of existence (you can't fight an R&D war with $60 mil a year and a lawsuit from NV on top). We'll have to see if anyone can make a gpu without using NV/AMD/Intel IP. I'd expect an AMD suit once NV's is over, assuming they win, if AMD has anything to sue over. Intel is a big question mark here, as I'm not sure how much gpu IP they have, but they lost to NV so...who knows.
Having said that, I'm fairly certain all the other players have less gpu IP than Intel for current PC tech (the last decade or two, which is where the patents being trampled came from and which is now starting to be used fully in mobile). We'll know in a few months on that front, I guess, as the suit evolves. It wasn't called the 'wild west of patent infringement' by anandtech for nothing ;) Their problem is they all picked on a company that can afford to go to trial for a decade and laugh. This isn't Intel vs. AMD here, as NV can easily survive $100 mil/year or more in a suit for a multi-billion payoff overall, plus licensing their GPU IP to them all forever for even more fees on top of willful infringement. A jury would want to stick it to highly profitable companies like Qcom, Apple, Samsung (especially the last one, not being American). Apple will come after they win vs. the others, or Apple will be smart enough to deal if it ends really ugly for the others.
At any rate, Intel can't give their chips away forever, especially with fabs catching up and an ARM armada working together to take down Wintel. I'd include Valve/SteamOS in the ARM group too, as I think it will be running on NV SoCs soon (Gabe hates MS, their store, BillG etc). That is a no-brainer for NV, and even for Valve if they want DirectX/Windows dead and linux/SteamOS to take over. Valve doesn't really care if Intel dies, but ARM is the way to help kill Windows/DirectX for sure and push stuff like OpenGL for all. He's not a huge fan of consoles either, so again: go ARM, get great GPUs over there in a PC-like box, then apps.
Speedfriend - Monday, February 23, 2015
1.5bn Windows users able to buy a Windows laptop for $250 couldn't give a sh*t about an ARM box with some crappy free OS on it. The box you are talking about would be in competition with the PS4 and Xbox One, not normal PCs. Wintel exists because, despite everyone's moaning, it actually works. I have 10 applications open at once on my work PC and hardly ever switch it off. Nothing crashes, unlike my iPad or Android tablets, where apps crash daily and where any sort of multitasking is a joke. I have moved to a Windows tablet now because it is just better.
Klimax - Monday, February 23, 2015
TheJian: Keep dreaming. It won't happen. You ignored way too many things, skipped inconvenient facts, used an extremely faulty basis for faulty extrapolation, or outright jumped to your favorite conclusion. ARM won't have much further success, nor will Android. (Even where they could have, Intel and MS have already taken care of it.) Your post has about a 1% relation with reality.
Guest8 - Monday, February 23, 2015
No, the losses are still $4 billion a year; this year they are expected to decrease as the contra revenue decreases. As an Intel stockholder I know the breakdowns and I am not nervous at all. I understand that developing IP from scratch and design wins take time and money. Their Moorefield SoC is already at the level of the A7 / Snapdragon 805. All of your other fantasies aren't going to happen. No one is going to catch Intel at this point in x86. ARM is 4+ generations behind in server chip design. That's at least a decade's lead if ARM can make advancements every 2 years like Intel can.
http://www.forbes.com/sites/kurtmarko/2014/12/10/a...
ARM will not be able to scale up unless Samsung makes it happen, which is still a long shot. Their "14nm" process is only a half-shrink, which means it is not as efficient as Intel's 14nm, and costs are going up instead of down, unlike Intel. Technical factors aside, you should know the economics do not work. Great quote from Russ Fisher, former industry insider, at investment site seekingalpha.com:
"The foundry industry is spending about $25 billion per year to supply Apple and QCOM with about a total of $12 billion per year in product ($7 billion to Apple and $5 billion to QCOM) That can't work. The author should write about what a bankrupt and hopeless strategy that is. Then to make matters worse, Apple bounces their business, (or plans to) from TSMC to Samsung and back again. How can that work?"
With foundries scrambling for the limited amount of profit dollars from Apple and Qualcomm, there isn't enough money in the comparatively low-margin ARM SoC business to fund R&D like Intel can from the x86 business they dominate. Intel just reported a record-breaking revenue quarter, which includes the negative revenue from mobile. Looks like they can afford to keep up their strategy. Intel has shown x86 can scale down; ARM has yet to demonstrate the ability to scale up. Even in Chromebooks, reviewers noted the x86-powered variants can handle tasks much faster than ARM-based ones. You are welcome to ignore economic reality and keep dreaming.