Fable Legends Early Preview: DirectX 12 Benchmark Analysis
by Ryan Smith, Ian Cutress & Daniel Williams on September 24, 2015 9:00 AM EST

Update 2016/03/07: Well, so much for that. Fable Legends has been canceled, so the honor of being the first Unreal Engine 4 based DX12 game will ultimately fall to another title.
DirectX 12 is now out in the wild as part of Windows 10 and the updated driver model, WDDM 2.0, that comes with it. Unlike DirectX 11, there were no major gaming titles at launch, so we are now waiting for games that take advantage of DX12 to see what difference it makes to the game playing experience. One of the main focal points of DX12 is draw calls: leveraging multiple processor cores to dispatch GPU workloads, rather than the previous model of a single core doing most of the work. DX12 brings a lot of changes aimed at increasing performance and offering an even more immersive experience, but it also shifts some support requirements, such as multi-GPU (SLI or CrossFire), onto the engine developers. We tackled two synthetic tests earlier this year, Star Swarm and 3DMark, but due to timing and other industry events we are waiting for a better time to test the Ashes of the Singularity benchmark as that game nears completion. In the meantime, a PR team got in contact with us regarding the upcoming Fable Legends title, built on Unreal Engine 4, along with an early access preview benchmark. Here are our results so far.
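As a rough illustration of what 'leveraging multiple processor cores to dispatch GPU workloads' looks like in practice, below is a minimal, hypothetical C++ sketch of multi-threaded command list recording in D3D12. It is not taken from any shipping engine; the RecordScenePortion helper and the overall structure are assumptions for illustration, and real engines layer considerably more state management on top.

```cpp
// Minimal sketch: recording D3D12 command lists on multiple threads and
// submitting them in one batch. Device/queue creation, PSOs and actual draw
// state are omitted; RecordScenePortion() is a hypothetical placeholder.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordScenePortion(ID3D12GraphicsCommandList* cl, unsigned threadIndex)
{
    // Hypothetical: each thread would bind its root signature/PSO and issue
    // the draw calls for its slice of the scene here.
    (void)cl; (void)threadIndex;
}

void SubmitFrameMultithreaded(ID3D12Device* device, ID3D12CommandQueue* queue,
                              unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);
    std::vector<std::thread>                       workers;

    for (unsigned i = 0; i < workerCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));

        // Each worker records its own command list; no shared lock is needed
        // because every thread owns its allocator and list.
        workers.emplace_back([&, i]() {
            RecordScenePortion(lists[i].Get(), i);
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // A single submission hands all of the recorded work to the GPU.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

The point of the design is that command list recording, which used to be serialized inside the driver under DX11, can now be spread across however many cores the CPU has, with only the final submission happening on one thread.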
Fable Legends
Fable Legends is an Xbox One/Windows 10 exclusive free-to-play title built by Lionhead Studios in Unreal Engine 4. The game, styled as a ‘cooperative action RPG’, consists of asymmetrical multiplayer matches, with attackers trying to raid a base while the defender plays more of a tower defense role.
The benchmark provided is more of a graphics showpiece than a representation of the gameplay, designed to show off the capabilities of the engine and its DX12 implementation. Unfortunately, as a result, we don't get to see any of the actual gameplay, which appears to focus more on combat. This is one of the first DirectX 12 benchmarks available - Ashes of the Singularity by Stardock was released just before IDF, but due to scheduling we have not yet had a chance to dig into it - and it is our first look at a DirectX 12 game engine with an actual game attached.
Official Trailer
This benchmark pans through several outdoor scenes in a fashion similar to the Unigine Valley benchmark, focusing more on landscapes, distance drawing and tessellation than on an up-front first-person perspective. Graphical effects such as dynamic global illumination are computed on the fly, making for subtle differences in lighting, and the benchmark shows the day/night cycle being accelerated, similar to the lengthy Grand Theft Auto V benchmark. The engine itself draws on DX12 explicit features such as ‘asynchronous compute, manual resource barrier tracking, and explicit memory management’, which allow the application to better take advantage of the available hardware and give developers finer control over multi-threaded work and GPU memory resources. The updated engine has had several additions to implement these visual effects, and the promise is that the use of DirectX 12 will help to improve both the experience and performance.
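To put those terms in context, here is a minimal, illustrative sketch of what two of them look like at the D3D12 API level: a manually declared resource state transition, and the creation of a separate compute queue for asynchronous compute. This is not code from Fable Legends or Unreal Engine 4; the function names, resource and the states chosen are assumptions for the example.

```cpp
// Illustrative sketch only: two of the DX12 mechanisms named above, as they
// appear at the API level. 'texture', 'device' and the chosen states are
// assumptions for the example, not engine code.
#include <windows.h>
#include <d3d12.h>

// Manual resource barrier tracking: the application (not the driver) declares
// when a resource moves from one usage state to another.
void TransitionToShaderResource(ID3D12GraphicsCommandList* cl, ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cl->ResourceBarrier(1, &barrier);
}

// Asynchronous compute: a second queue of type COMPUTE lets compute work
// (lighting, post-processing, etc.) overlap with graphics work on the GPU.
HRESULT CreateAsyncComputeQueue(ID3D12Device* device, ID3D12CommandQueue** outQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    return device->CreateCommandQueue(&desc, IID_PPV_ARGS(outQueue));
}
```

Explicit memory management follows the same philosophy: the application, rather than the driver, decides when memory is allocated and which resources live in it, which is precisely the kind of responsibility that DX12 moves from the driver onto the engine developer.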
The Test
The software provided to us is a prerelease version of Fable Legends running on early drivers, so the performance at this point is most likely not representative of the game at launch and should improve before release. What we see here is more of a broad picture of how different GPUs scale when DX12 features are thrown into the mix. In fact, AMD sent us a note that a new driver is available specifically for this benchmark which should improve the scores on the Fury X, although it arrived too late for this pre-release look at Fable Legends (Ryan did the testing but is covering Samsung’s 950 Pro launch in Korea at this time). That alone underscores just how early in the game and driver development cycle DirectX 12 is for all players. But as with most important titles, we expect drivers and software updates to continue to drive performance forward as developers and engineers come to understand how the new version of DirectX works.
With that being said, there do not appear to be any stability issues with the benchmark as it stands, and we have had time to test graphics cards going back a few generations for both AMD and NVIDIA. Our pre-release package came with three standard test resolutions: 1280x720, 1920x1080 and 4K. We also tested a number of these combinations under several simulated CPU core and thread counts in order to emulate a number of popular CPUs on the market.
CPU: Intel Core i7-4960X in three modes:
    'Core i7' - 6 Cores, 12 Threads at 4.2 GHz
    'Core i5' - 4 Cores, 4 Threads at 3.8 GHz
    'Core i3' - 2 Cores, 4 Threads at 3.8 GHz
Motherboard: ASRock Fatal1ty X79 Professional
Power Supply: Corsair AX1200i
Hard Disk: Samsung SSD 840 EVO (750GB)
Memory: G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: NZXT Phantom 630 Windowed Edition
Monitor: Asus PQ321
Video Cards: AMD Radeon R9 Fury X
    AMD Radeon R9 290X
    AMD Radeon R9 285
    AMD Radeon HD 7970
    NVIDIA GeForce GTX 980 Ti
    NVIDIA GeForce GTX 970 (EVGA)
    NVIDIA GeForce GTX 960
    NVIDIA GeForce GTX 680
    NVIDIA GeForce GTX 750 Ti
Video Drivers: NVIDIA Release 355.82
    AMD Catalyst 15.201.1102
OS: Windows 10
This Test
All the results in this piece are on discrete GPUs. The benchmark outputs a score, which is merely the average frame rate multiplied by a hundred, but it also dumps an extensive data log that tracks over 186 different elements of the system every frame, such as the compute time for various effects. Our testing takes on three roles – a direct GPU comparison of average frame rates at 1080p and 720p in our 'Core i7' configuration, CPU scaling at each resolution with the GTX 980 Ti and AMD Fury X, and then a deeper analysis of the percentile data for those two cards at each resolution and each CPU configuration.
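For reference, the arithmetic behind the score and the percentile figures is simple; the sketch below shows one way to compute them from a per-frame time log. The nearest-rank percentile method and the sample numbers are our own assumptions, not the benchmark's exact internals - only the 'score = average frame rate x 100' relationship comes from the benchmark output.

```cpp
// Sketch of the arithmetic described above: turning a per-frame time log into
// an average frame rate, the benchmark's score (average FPS x 100), and a
// frame-time percentile. Sample data and the percentile method are assumptions.
#include <algorithm>
#include <cstdio>
#include <vector>

double AverageFps(const std::vector<double>& frameTimesMs)
{
    double totalMs = 0.0;
    for (double t : frameTimesMs) totalMs += t;
    return 1000.0 * frameTimesMs.size() / totalMs;    // frames per second
}

double FrameTimePercentileMs(std::vector<double> frameTimesMs, double pct)
{
    std::sort(frameTimesMs.begin(), frameTimesMs.end());
    size_t rank = static_cast<size_t>(pct / 100.0 * (frameTimesMs.size() - 1));
    return frameTimesMs[rank];                        // nearest-rank estimate
}

int main()
{
    // Hypothetical frame-time log in milliseconds.
    std::vector<double> ms = {16.1, 17.0, 15.8, 22.4, 16.5, 18.9, 16.2, 31.0};

    double fps   = AverageFps(ms);
    double score = fps * 100.0;                       // score as described above
    std::printf("avg %.1f fps, score %.0f, 95th pct frame time %.1f ms\n",
                fps, score, FrameTimePercentileMs(ms, 95.0));
    return 0;
}
```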
141 Comments
piiman - Saturday, September 26, 2015 - link
"Yes, but when the goal is to show improvements in rendering performance"I'm completely confused with this "comparison"
How does this story even remotely show how well DX12 works compared to DX11? All they did was a DX12 VIDEO card comparison? It tells us NOTHING in regard to how much faster DX12 is compared to 11.
inighthawki - Saturday, September 26, 2015 - link
I guess what I mean is the purpose of a graphics benchmark is not to show real world game performance, it is to show the performance of the graphics API. In this case, the goal is trying to show that D3D12 works well. Throwing someone into a 64 player match of Battlefield 4 to test a graphics benchmark defeats the purpose because you are introducing a bunch of overhead completely unrelated to graphics.

figus77 - Monday, September 28, 2015 - link
You are wrong, many DX12 implementations will help in very chaotic situations with many characters and heavy use of AI. This benchmark is useful like a 3DMark... just look at the images and say it's nice graphics (still, Witcher 3 in DX11 is far better for me).

inighthawki - Tuesday, September 29, 2015 - link
I think you missed the point - I did not say it would not help, I just said that throwing on tons of extra overhead does not isolate the overhead improvements on the graphics runtime. You would get fairly unreliable results due to the massive variation caused by actual gameplay. When you do a benchmark of a specific thing - e.g. a graphics benchmark, which is what this is - then you want to perform as little non-graphics work as possible.

mattevansc3 - Thursday, September 24, 2015 - link
Yes, the game built on AMD technology (Mantle) before being ported to DX12, sponsored by AMD, made in partnership with AMD and received development support from AMD is a more representative benchmark than a 3rd party game built on a hardware agnostic engine.YukaKun - Thursday, September 24, 2015 - link
Yeah, because Unreal is so very neutral. Remember "TWIMTBP" from 1999 to 2010 in every UE game? Don't think UE4 is a clean slate code-wise for AMD and NVIDIA. They will still favor NVIDIA by re-using old code paths, so I'm pretty sure even if the guys developing Fable are neutral (or try to be), UE underneath is not.
Cheers!
BillyONeal - Thursday, September 24, 2015 - link
That's because AMD's developer outreach was terrible at the time, not because Unreal did anything specific.

Kutark - Monday, September 28, 2015 - link
Yes, but you have to remember, Nvidia is Satan, AMD is Jesus. Keep that in mind when you read comments like that and all will make sense.

Stuka87 - Thursday, September 24, 2015 - link
nVidia is a primary sponsor of the Unreal Engine.RussianSensation - Thursday, September 24, 2015 - link
UE4 is not a brand agnostic engine. In fact, every benchmark you see on UE4 has the GTX 970 beating the 290X.

I have summarized the recent UE4 games where the 970 beats the 290X easily:
http://forums.anandtech.com/showpost.php?p=3772288...
In Fable Legends, a UE4 DX12 benchmark, a 925mhz HD7970 crushes the GTX960 by 32%, while an R9 290X beats GTX970 by 13%. Those are not normal results for UE4 games that have favoured NV's Maxwell architecture.
Furthermore, we are seeing AMD cards perform exceptionally well at lower resolutions, most likely because DX12 helped resolve their DX11 API draw-call bottleneck. This is a huge boon for GCN moving forward if more DX12 games come out.
Looking at other websites, a $280 R9 390 is on the heels of a $450 GTX980.
http://techreport.com/review/29090/fable-legends-d...
So really besides 980Ti (TechReport uses a heavily factory pre-overclocked Asus Strix 980TI that boosts to 1380mhz out of the box), the entire stack of NV's cards from $160-500 loses badly to GCN in terms of expected price/performance.
We should wait for the full game's release and give NV/AMD time to upgrade their drivers but thus far the performance in Ashes and Fable Legends is looking extremely strong for AMD's cards.