It is high praise when someone like Jim Keller says that your company ‘has made impressive progress, and has the most promising architecture out there’. That praise means twice as much if Keller actually joins the company. Today Tenstorrent is announcing that Jim Keller, compute architect extraordinaire, has joined the company as its President and Chief Technology Officer, and has also taken a seat on the company board.

To our regular audience, Jim Keller is a known expert in all things computer architecture. His history starts at DEC, designing Alpha processors, before moving to a first stint at AMD for two years working to launch K7 and K8. Keller then spent five years as Chief Architect at SiByte/Broadcom designing MIPS cores for network interfaces, four years at P.A. Semi, four years at Apple (A4 and A5), then returned to AMD for three years as Corporate VP and Chief Cores Architect in charge of the new generation of CPU architectures, K12 and Zen. This was followed by two years at Tesla as VP of Autopilot Hardware Engineering creating the Full Self-Driving chip, then two years as Intel’s Senior VP of the Silicon Engineering Group, before leaving in June 2020. Since his departure from Intel, a number of key industry analysts (and ourselves) have been guessing where Jim would land. He briefly appeared in the audience of Elon Musk’s Neuralink presentation in August 2020, alongside Lex Fridman.

Jim Keller: Work Experience
Start     End      Company      Title                               Important Product
1980s     1998     DEC          Architect                           Alpha
1998      1999     AMD          Lead Architect                      K7, K8
1999      2000     SiByte       Chief Architect                     MIPS Networking
2000      2004     Broadcom     Chief Architect                     MIPS Networking
2004      2008     P.A. Semi    VP Engineering                      Low Power Mobile
2008      2012     Apple        VP Engineering                      A4 / A5 Mobile
8/2012    9/2015   AMD          Corp VP and Chief Cores Architect   Skybridge / K12 (+ Zen)
1/2016    4/2018   Tesla        VP Autopilot Hardware Engineering   Full Self-Driving (FSD) Chip
4/2018    6/2020   Intel        Senior VP, Silicon Engineering      ?
2021      -        Tenstorrent  President and CTO                   TBD

Today Tenstorrent reached out to inform us that Jim Keller has taken the position of President and Chief Technology Officer of the company, as well as becoming a member of its Board of Directors. Jim's role, based on his previous expertise, would appear to be in the design of future products for the company as well as building out the team at Tenstorrent to succeed in that goal.

CEO Ljubisa Bajic confirmed Jim’s appointment as President and CTO of the company, stating that:

Tenstorrent was founded on the belief that the ongoing shift towards ML-centric software necessitates a corresponding transformation in computational capabilities. There is nobody more capable of executing this vision than Jim Keller, a leader who is equally great at designing computers, cultures, and organizations. I am thrilled to be working with Jim and beyond excited about the possibilities our partnership unlocks.

Tenstorrent is a pure-play fabless AI chip design and software company, which means that it designs silicon for machine learning, uses a foundry to manufacture the hardware, then works with partners to create solutions (as in, chips + system + software + optimizations for that customer). For those who know this space, this makes the company sound like any of the other 50 companies on the market that seem to be doing the same thing. The typical split with pure-play fabless AI chip design companies is whether they focus on training or inference: Tenstorrent does both, and is already in the process of finalizing its third generation processor.

Founded in 2016, Tenstorrent has around 70 employees between Toronto and Austin. The critical members of the company all have backgrounds in silicon design: the CEO led power and performance architecture at AMD as well as system architecture for Tegra at NVIDIA, the head of system software spent 16 years across AMD and Altera, and there’s expertise in neural network accelerator design from Intel, GPU systems engineering at AMD, Arm CPU verification leads, IO virtualization expertise from AMD, Intel’s former neural network compiler team lead, as well as AMD’s former security and network development lead. It sounds like Jim will fit right in, as well as have a few former colleagues working alongside him.

Tenstorrent’s current generation product is Grayskull, a ~620mm2 processor built on GlobalFoundries’ 12nm that was initially designed as an inference accelerator and host. It contains 120 custom cores in a 2D bidirectional mesh, and offers 368 TeraOPs of 8-bit compute for only 65 W. Each of the 120 custom cores has a packet management engine for data control, a packet compute engine that contains Tenstorrent’s custom TENSIX cores, and five RISC cores for non-standard operations, such as conditionals. The chip focuses on sparse tensor operations by compressing matrix operations into packets, enabling pipeline parallelization of the compute steps through both the graph compiler and the packet manager. This also enables dynamic graph execution, and compared to some other AI chip models, allows compute and data transfer to happen asynchronously, rather than in fixed compute/transfer time domains.
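As a loose illustration of the packet-based sparse approach described above, the sketch below compresses a matrix into non-zero tiles (‘packets’) and multiplies using only the tiles that survive. The tile size, packet layout, and function names are invented for this example and do not reflect Tenstorrent's actual hardware or software.

```python
import numpy as np

# Hypothetical sketch: keep only the non-zero tiles of a sparse matrix
# as "packets", then compute using only the stored tiles.
TILE = 4

def to_packets(m, tile=TILE):
    """Split a matrix into tiles, keeping only tiles with non-zero data."""
    packets = {}
    for i in range(0, m.shape[0], tile):
        for j in range(0, m.shape[1], tile):
            t = m[i:i+tile, j:j+tile]
            if np.any(t):                 # all-zero tiles are dropped entirely
                packets[(i, j)] = t
    return packets

def packet_matmul(packets, b, rows, tile=TILE):
    """Multiply packetized A by dense B, touching only stored tiles."""
    out = np.zeros((rows, b.shape[1]))
    for (i, j), t in packets.items():
        out[i:i+tile] += t @ b[j:j+tile]
    return out

a = np.zeros((8, 8))
a[:4, :4] = np.arange(16).reshape(4, 4)   # only one quadrant is non-zero
b = np.ones((8, 2))
packets = to_packets(a)
dense = a @ b
sparse = packet_matmul(packets, b, a.shape[0])
assert np.allclose(dense, sparse)
print(len(packets), "of", (8 // TILE) * (8 // TILE), "tiles stored")
```

The payoff in this toy case is that three of the four tiles are never stored or computed on; in a real accelerator, skipping zero packets saves both memory bandwidth and compute cycles.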

Grayskull is currently shipping to Tenstorrent’s customers, all of which are still undisclosed.

The next generation chip, known as Wormhole, is more focused on training than inference, and also bundles in a 16x100G Ethernet port switch. The move from inference to training necessitates a faster memory interface, and so there are six channels of GDDR6 rather than eight channels of LPDDR4. This might seem low compared to other AI chips touting HBM integration; however, Tenstorrent’s plan here seems aligned with a more mid-range cost structure, offering machine learning compute at a better rate of efficiency than chips pushing the bleeding edge of frequency and process node (part of this advantage will come from yields as well).
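For a rough sense of why the memory swap matters, here is a back-of-envelope bandwidth comparison. The per-pin data rates and 32-bit channel widths below are typical catalogue figures for each memory type, not confirmed Grayskull or Wormhole specifications.

```python
# Back-of-envelope peak bandwidth comparison, assuming 32-bit channels,
# LPDDR4 at 4266 MT/s, and GDDR6 at 14 Gb/s per pin (illustrative values).
def bandwidth_gbs(channels, bus_bits, gbit_per_pin):
    """Peak bandwidth in GB/s for a given channel count and data rate."""
    return channels * bus_bits * gbit_per_pin / 8

lpddr4 = bandwidth_gbs(8, 32, 4.266)   # Grayskull-style: 8 ch LPDDR4
gddr6 = bandwidth_gbs(6, 32, 14.0)     # Wormhole-style: 6 ch GDDR6
print(f"LPDDR4 x8: {lpddr4:.0f} GB/s, GDDR6 x6: {gddr6:.0f} GB/s")
```

Under these assumptions, six GDDR6 channels deliver roughly 2.5x the peak bandwidth of eight LPDDR4 channels, which is the kind of uplift training workloads demand.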

So where exactly does Keller fit in if the current generation is already selling, and the next generation is almost ready to go? In speaking to the CEO, I confirmed that Keller ‘will be building new and interesting stuff with us’. This suggests that Keller’s involvement will be with 2022/2023 hardware in mind, following Tenstorrent’s overriding Software 2.0 strategy: that the hardware, compiler, and run-time should offer a full-stack approach to sparse (and dense) AI matrix calculations. In Jim’s own words:

Software 2.0 is the largest opportunity for computing innovation in a long time. Victory requires a comprehensive re-thinking of compute and low level software. Tenstorrent has made impressive progress, and with the most promising architecture out there, we are poised to become a next gen computing giant.

Jim Keller officially started last Wednesday, and the official wire announcement is set for 1/6, but we've been allowed to share in advance. Our request for an interview with Jim has been noted and filed, potentially for a few months down the line as the company has some more details on its platform and roadmap (I’ve also asked for an up-to-date headshot of Jim!). For those interested, I interviewed Jim back in July 2018, just after he started at Intel – you can read that interview here.

66 Comments
  • linuxgeex - Wednesday, January 6, 2021 - link

    Don't count your acquisitions before they are cooked. There are still hurdles to be leapt, and the official timeline, if there are no surprises, is to close the deal in March 2022. NVidia isn't going to be extorting implementors. That would land them in antitrust court.
  • whatthe123 - Wednesday, January 6, 2021 - link

    I think he means they don't want to indirectly help nvidia by giving them business. It would be suicide for nvidia to deny or upcharge AMD.
  • mode_13h - Friday, January 8, 2021 - link

    It was mostly an idle comment, but I think AMD should be more strategically focused on two things:

    1. Where they have the most competitive edge. Here, it seems like ARM has established a formidable challenge for AMD. By the time AMD could launch an ARM-based CPU, it would be going up against competitors with V1 and N2 cores, if not even newer iterations. Even achieving *parity* with such CPUs would not be a foregone conclusion. As a relative latecomer to that market, AMD can't afford to enter with a weaker offering.

    2. What type of server ecosystem they want to help foster. Lending more credibility to the ARM server movement helps Nvidia, while damaging the x86 server market position. And it's that credibility that's a lot more valuable to Nvidia than any short-term ISA licensing royalties.

    Also, we were already seeing a movement towards RISC V by Europeans, Chinese, and others who were skeptical about ARM's long-term openness and availability. With it now sitting in US hands - and Nvidia's, in particular - there's going to be a baseline of demand for non-ARM CPUs, without regard for any potential performance or efficiency differentials.

    So, AMD needs to ask itself some serious strategic questions. However, it makes sense for them to keep pushing their x86 CPUs until they come under serious threat from Intel or ARM-based CPUs. Shifting too early could shake confidence among customers about AMD's continued commitment to x86.
  • edzieba - Friday, January 8, 2021 - link

    AMD use small ARM cores for various other tasks (e.g. the PSP within all Zen chips), so will still be using and paying for that ARM architectural license regardless of whether K12 ever ships.
  • mode_13h - Saturday, January 9, 2021 - link

    Aren't those just off-the-shelf 32-bit ARM-designed cores? Why would AMD need an architectural license for that? They would be paying royalties on them, but I'm sure those are a lot cheaper than their current performance-oriented cores.
  • arashi - Wednesday, January 6, 2021 - link

    AMD has an architectural license like Apple, and it's a one off payment per arch if memory serves.
  • wumpus - Wednesday, January 6, 2021 - link

    The existence of a high performance commodity ARM server chip would significantly undermine AMD's most valuable asset, their AMD64 (sometimes called x86) ISA.

    I wouldn't be surprised if it mostly existed as a threat to keep Intel from extinguishing AMD during those dark days before Ryzen. As EPYC is breaking into the server room, there is even less reason to allow it to sneak out of the lab now.
  • Yojimbo - Wednesday, January 6, 2021 - link

    Wasn't the idea with K12 and Zen that they basically shared a common architecture but targeted two different ISAs?
  • Ian Cutress - Thursday, January 7, 2021 - link

    A common platform, not a common architecture. So the interfaces on the motherboards were identical, for IO, power, and DDR.
  • ViRGE - Tuesday, January 5, 2021 - link

    Jawbridge? Grayskull?

    Has anyone checked to see if Keller is on top of the building wielding a large sword and yelling "I have the power!"
