
  • MajGenRelativity - Tuesday, September 19, 2017 - link

    Looks interesting. I hope we'll see 10nm in consumer devices within a couple years
  • SharpEars - Tuesday, September 19, 2017 - link

    I hope we'll see some FPGA goodness in consumer devices in a couple years.
  • MajGenRelativity - Tuesday, September 19, 2017 - link

    What would they be used for?
  • ddriver - Tuesday, September 19, 2017 - link

    More of that "inferior glue" LOL
  • MajGenRelativity - Tuesday, September 19, 2017 - link

    That does not answer my question, as your answer doesn't make any sense.
  • wumpus - Tuesday, September 19, 2017 - link

    It makes sense assuming that there really isn't much reason to put FPGA code in a high volume processor. Of course with this interconnect, it hardly has to be all *that* high volume, but it is still there.

    Cryptography/cryptomining is really the only thing I'd expect to see FPGAs do anything in/near a CPU. I'd be a lot more interested if there were some sort of FPGA-based GPU operations; GPUs typically have the wide operations, latency/pipelining, and bandwidth to make FPGA operations really shine. They also have those "one or two little operations" that are optimized to extreme levels.
  • Notmyusualid - Tuesday, September 19, 2017 - link

    @ wumpus

    Nope - FPGAs can be used to accelerate often-used code that typically takes a lot of CPU time. There is a good write-up on it over on IEEE. They 'hard coded' the FPGAs to do these repeatable but necessary tasks, which were done much quicker than the CPU would normally be capable of - and they got wonderful overall performance gains by doing so.

    I don't think anyone on that project gave a toss about crypto mining. They were trying to accelerate everyday software.
  • Notmyusualid - Tuesday, September 19, 2017 - link

    @ MajGen

    You will come to learn that ddriver is the local clown here. Try to ignore him, if you can.
  • MajGenRelativity - Tuesday, September 19, 2017 - link

    I'm aware :P In any case, I liked your point about the FPGA accelerating everyday software.
  • HStewart - Tuesday, September 19, 2017 - link

    It's obvious we need some explanation of what an FPGA is, and it's definitely not glue.

    https://en.wikipedia.org/wiki/Field-programmable_g...

    This technology is typically used to make custom-designed chips, which can be programmed from the outside.
  • Yorgos - Tuesday, September 19, 2017 - link

    You could do a firmware update on your device, load the new .bit file onto the FPGA, and actually perform a hardware update or fix hardware problems.
    The potential is enormous, but the industry is not so willing to do it.
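
    On some Zynq-style Linux boards that kind of update really is just a file write. A minimal sketch, assuming the legacy /dev/xdevcfg interface and a bitstream already converted to the raw .bin format the driver expects (the file name is made up, and newer kernels expose the FPGA Manager sysfs interface instead):

        # Minimal sketch: reconfiguring the programmable logic at runtime on a
        # Zynq-style board via the legacy /dev/xdevcfg interface (assumption:
        # this interface exists on the target; the bitstream file name is
        # hypothetical and must already be in the raw .bin format).
        from pathlib import Path

        BITSTREAM = Path("update_v2.bit.bin")   # hypothetical update file
        DEVICE = Path("/dev/xdevcfg")           # board/kernel specific

        def load_bitstream(bitstream: Path, device: Path) -> None:
            """Write the bitstream into the programmable logic config port."""
            with device.open("wb") as cfg:
                cfg.write(bitstream.read_bytes())

        load_bitstream(BITSTREAM, DEVICE)
        print(f"Loaded {BITSTREAM.name}: hardware reconfigured")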
  • willis936 - Tuesday, September 19, 2017 - link

    That's for several very good reasons. The very first is that FPGAs cost an order of magnitude (or more in many cases) more than an ASIC. They're primarily for development and very expensive, low-quantity products. The next and most obvious is that being able to rewrite hardware is dangerous. It would be a target for attackers. It would make even the most well-hidden and aggressive keyloggers blush, given how much control over a system an FPGA-based attack could have and how well it could hide. Also, if your chip manufacturer starts churning out updates every day, you'll end up with a Firefox running in your computer. Hardware can be done right the first time.
  • MajGenRelativity - Tuesday, September 19, 2017 - link

    Valid points!
  • FunBunny2 - Tuesday, September 19, 2017 - link

    -- Hardware can be done right the first time.

    which is why the marginally intelligent "do computers" with a comp. sci. degree - invented just to pander to them.
  • MajGenRelativity - Tuesday, September 19, 2017 - link

    What?
  • MajGenRelativity - Tuesday, September 19, 2017 - link

    A. How often does the average consumer do a firmware update?
    B. I'm not sure it warrants adding a dedicated chip on 10nm in any situation that isn't industrial.
    C. I'm sure you can do a lot, I'm just not aware of it
  • saratoga4 - Tuesday, September 19, 2017 - link

    FPGAs are widely used in low volume electronics where the cost of running an ASIC would be prohibitive. They're not used very much in consumer electronics because they are slower, use more power, and are more expensive than ASICs when mass produced. By the time you get up to consumer electronics volumes, it is usually better/cheaper/faster to get a working ASIC out (possibly based on FPGA prototypes) than to ship everyone an FPGA.
  • MajGenRelativity - Tuesday, September 19, 2017 - link

    I wasn't aware of that. Good to know.
  • willis936 - Tuesday, September 19, 2017 - link

    What would be a real game changer to me is if the development tools around FPGAs became more accessible. After (trying) to use Xilinx ISE, I can safely say I'm much more cozy doing things in traditional software. I don't want to spend 100 hours trying to make an IDE behave. Verilog and VHDL are both simple to write and are intuitive to think about for anyone who has spent time with combinational and sequential logic (something that I think is in every electrical engineer's curriculum). Why do the tools need to make it so hard to just implement a design? Hell it would be nice if they had a block diagram mode similar to schematic editors like orcad capture.
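
    The mental model really is just combinational next-state logic feeding registers that update on a clock edge. A rough sketch of that model in plain Python, purely illustrative (this is not HDL and not synthesizable):

        # Rough sketch of the HDL mental model in plain Python: a combinational
        # next-state function plus a register that only changes on the clock edge.
        # Purely illustrative; not HDL and not synthesizable.

        def next_count(count: int, enable: bool) -> int:
            """Combinational logic: next value of a 4-bit counter."""
            return (count + 1) & 0xF if enable else count

        def clock_edge(state: dict, enable: bool) -> None:
            """Sequential logic: latch the combinational result into the register."""
            state["count"] = next_count(state["count"], enable)

        state = {"count": 0}
        for _ in range(20):
            clock_edge(state, enable=True)
        print(state["count"])  # 20 mod 16 == 4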
  • saratoga4 - Tuesday, September 19, 2017 - link

    >Hell it would be nice if they had a block diagram mode similar to schematic editors like orcad capture.

    These exist but are not very popular, because the approach does not scale well to complex devices like those implemented on an FPGA. They're more useful for devices like CPLDs, though, where you usually only want to "wire" a breadboard-sized "circuit".

    The complexity of FPGA tools is a reflection of their target market, which is people designing very complex hardware devices.
  • willis936 - Tuesday, September 19, 2017 - link

    I don't see why it wouldn't scale up. Much like Simulink (and not like LabVIEW), there should be a code generation option. Furthermore, OrCAD Capture-style software scales up to systems as large as FPGAs. Abstraction works. There are more and easier ways to become disorganized compared to traditional code, so I can see why people would shy away from it. I do, however, think it'd be incredibly powerful.
  • flgt - Tuesday, September 19, 2017 - link

    Text based design will always be more efficient and maintainable in the hands of a skilled engineer. You actually can enter the design schematically in most of the FPGA vendor tools, but no serious design teams do it on complex designs. Plus there is other peripheral information that must be brought into the design such as timing constraints that make batching up text files a simpler approach. We're not even gonna get into simulation/verification here which is where a lot of the heavy lifting is.

    Tools like Simulink code generation are OK for algorithms but quickly fall apart for peripheral functions you need for real designs. You basically need to be an HDL expert to make sure it's doing what you really intended. It's not the holy grail Mathworks marketing will tell you it is.
  • flgt - Tuesday, September 19, 2017 - link

    The same can be said for software, which is why I wish LabVIEW would go away. It's hard to enforce coding standards on a picture, and you quickly find the developers take forever to do simple things that could be done in a few lines of text source code.
  • willis936 - Wednesday, September 20, 2017 - link

    What would make it a game changer is that it would make FPGA development more accessible. If I wanted to buy a 100 dollar FPGA, plunk down some I/O and logic blocks, tweak a few things, and synthesize, I'd be happy. The number of people capable of working with FPGAs would increase tenfold, and maybe clever inventions from them would become more common.
  • BoyBawang - Tuesday, September 26, 2017 - link

    I am thinking of an operating system that runs on an FPGA. Boot-up speed must be very fast.
  • Notmyusualid - Tuesday, September 19, 2017 - link

    @ SharpEars

    Yep - that is what certainly has my attention too.
  • StevoLincolnite - Tuesday, September 19, 2017 - link

    Still using a 32nm SB-E CPU, so hopefully 10nm Intel CPUs offer the right price/performance to tempt me to upgrade.
  • siberian3 - Tuesday, September 19, 2017 - link

    A couple of years would be a disaster for Intel.
  • MajGenRelativity - Tuesday, September 19, 2017 - link

    True, so let's hope they move things along.
  • MrSpadge - Tuesday, September 19, 2017 - link

    That wafer looks more like 300 mm rather than 10 nm ;)
  • MajGenRelativity - Tuesday, September 19, 2017 - link

    Joke? :P
  • BrokenCrayons - Tuesday, September 19, 2017 - link

    Joke and fact all at once! MrSpadge wins the Internet for today.
  • MajGenRelativity - Tuesday, September 19, 2017 - link

    Congratulations on your keys to the Internet! Please lock it away and then throw away the key :P
  • jjj - Tuesday, September 19, 2017 - link

    They were claiming over 3.3 GHz for the A75 at 250 uW per MHz, so that's OK.

    Specs wise there was this slide http://img1.mydrivers.com/img/20170919/970972625ab...
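
    As a quick sanity check on those numbers (just arithmetic on the figures quoted above, nothing more):

        # Back-of-the-envelope check: 250 uW per MHz at 3.3 GHz.
        power_per_mhz_w = 250e-6   # 250 microwatts per MHz, expressed in watts
        freq_mhz = 3300            # 3.3 GHz
        print(power_per_mhz_w * freq_mhz)  # ~0.825 W per core at peak clock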
  • peevee - Tuesday, September 19, 2017 - link

    "using a new transistor counting methodology"

    What? +2 for each transistor? ;)
  • edzieba - Tuesday, September 19, 2017 - link

    The new methodology = count total transistors per mm^2, rather than picking one arbitrary dimension of a certain axis of a certain process stage and judging everything by that.
  • HStewart - Tuesday, September 19, 2017 - link

    From what it looks like, Intel is stacking the transistors on top of each other, so the size of the die is not the actual determining factor - the number of transistors is.
  • edzieba - Tuesday, September 19, 2017 - link

    No stacking involved. The dies are a single layer, and the 'extra' link dies are purely acting as interconnects.
  • MrSpadge - Tuesday, September 19, 2017 - link

    Designs use transistors of different sizes, depending on what they need to drive. The mobile SoC makers made Intel look bad with their densities, when the reason was that on average their applications required smaller transistors than Intel's high-frequency designs. So Intel decided on some standardized transistor mix for these comparisons to give neither an advantage nor a disadvantage to anyone. Actual designs will differ in density, but that's true for any fab.
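
    For reference, the standardized mix Intel proposed (as I understand it) weights two library cells rather than counting transistors on any shipping die: 60% a small NAND2 cell and 40% a large scan flip-flop cell. A minimal sketch of that calculation, with made-up cell areas purely to show the math:

        # Sketch of the weighted standard-cell density metric Intel proposed,
        # as I understand it: 0.6 * NAND2 cell density + 0.4 * scan flip-flop
        # cell density, reported in millions of transistors per mm^2.
        # The cell areas (and the flip-flop transistor count) are placeholders.

        def density_mtr_per_mm2(nand2_area_um2: float, sff_area_um2: float,
                                nand2_transistors: int = 4,
                                sff_transistors: int = 36) -> float:
            nand2_density = nand2_transistors / nand2_area_um2  # transistors per um^2
            sff_density = sff_transistors / sff_area_um2
            # 1 transistor per um^2 equals 1 MTr per mm^2, so no unit conversion needed.
            return 0.6 * nand2_density + 0.4 * sff_density

        # Invented cell areas, purely to illustrate the weighting:
        print(density_mtr_per_mm2(nand2_area_um2=0.04, sff_area_um2=0.30))  # 108.0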
  • name99 - Tuesday, September 19, 2017 - link

    "some standardized transistor mix for these comparisons to give neither an advantage nor a disadvantage to anyone"

    Oh you naive little kitten. There's ALWAYS winners and losers from these sorts of decisions.

    For example memory is more dense than logic. So a metric that privileges memory density over logic density makes a company that ships with lots of logic on the chip look worse. Even if your metric includes both memory and logic transistors, who's to say what the appropriate weighting is?

    There are also higher level arguments. SoCs don't run on transistors alone. You need wiring to connect them; you need somewhat empty space between the transistors; you need clock distribution and pads connecting silicon to metal. Point is, all these things take space, and a metric that measures ONLY the size of a transistor does not capture how efficiently a process does or does not use this extra space --- a more reasonable metric would cover a larger area.

    What does Intel think about this?
    "Simply taking the total transistor count of a chip and dividing by its area is not meaningful because of the large number of design decisions that can affect it–factors such as cache sizes and performance targets can cause great variations in this value"
    Which I translate as "our individual transistors are tiny, so we're going to publicize that, but our tools for connecting them together suck, so we're not going to discuss how dense the transistors are where it actually MATTERS --- laid out on a real chip".

    But don't worry, there are other metrics one can use!
    FinFETs use multiple fins per transistor to attain high enough drive current. One of TSMC's goals over the next few years is to reduce the number of fins needed for most transistors from the current 3 or 4 down through 2 and ultimately to 1 (achieved in part by making each fin higher and higher and higher). So there's an easy out for Intel here --- switch to FIN density rather than TRANSISTOR density and, voila, Intel is immediately looking a whole lot better --- so many more fins per sq mm!
  • MrSpadge - Tuesday, September 19, 2017 - link

    > So a metric that privileges memory density over logic density makes a company that ships with lots of logic on the chip look worse. Even if your metric includes both memory and logic transistors, who's to say what the appropriate weighting is?

    Apparently you completely misunderstood the point. This scheme is made to exactly combat what you're suggesting. It's calculating a density metric without taking the actual product produced on the process into account, as that's always going to vary in the fractions of memory, logic etc. Which is just what Intel is saying in the quote you're attacking later on.

    There's really no point trying to compare PROCESSES when different chip designs are involved. That's what the competition did to Intel, and what they want to correct with this. This puts them in a better light, obviously, but not with an unfair advantage. So by "to give neither an advantage nor a disadvantage to anyone" I mean compared to how the different designs should technically be compared, not compared to what we previously had.
  • HStewart - Tuesday, September 19, 2017 - link

    This sounds to me like a significant advancement in technology, but it also explains why 10nm is taking so long to develop. Being able to pack twice (possibly 2.5 times) the transistors into the same space is really awesome. This could mean including twice as many cores in the same space.

    But Intel is extremely smart; they realize that such technology is expensive, and for some components of the chip - like I/O and communications - you don't need it.

    I feel this is just the beginning and we are going to see quite interesting things in the future. More transistors also possibly means that Intel will increase integrated graphics levels.
  • name99 - Tuesday, September 19, 2017 - link

    Seriously? The best they can show is a wafer?
    You don't think TSMC has 7nm wafers RIGHT NOW? The competition is not TSMC 10nm today, it is whatever is around when Intel finally ships 10nm.

    Right now it looks like Apple will be shipping the A11X on 7nm in April or May or so next year. Probably one of the top tier ARM vendors (QC? Huawei?) will also have one of their SoCs on TSMC 7nm at the same sort of time.

    Meanwhile, let's look at Intel's record.
    2011 announcement of Fab 42 to handle 14nm (which was then cancelled/postponed) [But don't worry, this time, for sure, Fab 42 is going to open one day, now equipped for 7nm ...]
    Q3 2013 Intel demos a few 14nm Broadwells. Promises they'll be available Q4.
    They were --- just Q4 2014...

    Compare that to this round, where we don't even have demo chips working, just a wafer, and there isn't enough confidence to even suggest a shipping date.

    This is not boasting from a position of strength. This is a desperate attempt to fool the rubes with a variety of smoke and mirrors (test wafers vs test chips vs shipping dates; comparing Intel in [two years?] with TSMC today rather than in two years; etc)
  • Notmyusualid - Tuesday, September 19, 2017 - link

    Intel typically tells the truth regarding their lithography. If they say it's 10nm, it is.

    I'm not sure what else you expect them to show you (and the competition)?

    Maybe write them, ask for a tour of the 10nm fab, and let me know how you get on, and whether you understood a damn thing.

    Back in my Uni days, we used 'Chipwise' to design our own circuits. It was without doubt the most interesting part of my electronics degree.

    Seeing some of the final-year projects blew me away. They had them blown-up to A1 posters on the wall. Lots of repetition in there, but that's required, and still impressive. 1um? I can't even recall now.

    I cannot even hold a dream as to what miracles Intel must be performing in the back room, some 20 years later. But I guess the same can be said for any Fab/foundry today.
  • name99 - Tuesday, September 19, 2017 - link

    Did you read my comment?

    The point is not Intel's claims regarding their 10nm process, it is
    (a) their timetable
    (b) the ridiculousness of comparing a process they will ship in (one? two?) years against a process that TSMC is shipping today. Heck, it is perfectly possible that TSMC will be shipping their 7nm (which Intel is happy to admit is comparable to Intel's 10nm) before Intel ships 10nm, either at all or in volume.
  • FreckledTrout - Tuesday, September 19, 2017 - link

    It's really hard to compare your product to others' future products; almost nobody does this because, as a competitor, you really don't have enough info to make any legitimate claims. What do you want Intel to say, that on paper ours looks better/worse than TSMC's 7nm? You have an odd stance on this topic.
  • name99 - Tuesday, September 19, 2017 - link

    Like I said --- fool the rubes...
  • The_Assimilator - Tuesday, September 19, 2017 - link

    Intel's first 10nm production-quality wafer is of ARM chips? Oh the irony...
  • vladx - Tuesday, September 19, 2017 - link

    It would be great if Intel opens their fabs to big players like Huawei or Qualcomm, I'd love to have my Honor 9 phone replaced with a next gen Kirin-powered Honor manufactured by Intel.
  • MrSpadge - Wednesday, September 20, 2017 - link

    It is interesting, for sure. A move to push their foundry business, I suppose. And it's definitely not their 1st wafer, just the 1st publicly shown one. You start with far simpler test structures, DRAM etc. And in the added information they also claim to show a Cannon Lake wafer.
  • MrSpadge - Wednesday, September 20, 2017 - link

    @Ian: that Cannon Lake wafer looks far too blue to be unprocessed Si. It should be greyish, even at an angle and with somewhat coloured illumination - unless the room was lighted purely blue.
  • MrSpadge - Wednesday, September 20, 2017 - link

    (in which case the guy's shirt would also look blue)
