Star Swarm & The Test

For today’s DirectX 12 preview, Microsoft and Oxide Games have supplied us with a newer version of Oxide’s Star Swarm demo. Originally released in early 2014 to demonstrate Oxide’s Nitrous engine and the capabilities of Mantle, Star Swarm is a massive space combat demo designed to push the limits of high-level APIs and to showcase the performance advantages of low-level APIs. Because its thousands of units and other effects generate a high number of draw calls, Star Swarm can push over 100K draw calls, a massive workload that causes high-level APIs to simply crumple.

Because Star Swarm generates so many draw calls, it is essentially a best-case scenario test for low-level APIs, exploiting the fact that high-level APIs can’t effectively spread the draw call workload over several CPU threads. As a result, the performance gains from DirectX 12 in Star Swarm are going to be much greater than in most (if not all) actual games; nonetheless, it’s an effective tool for demonstrating the performance capabilities of DirectX 12 and for showcasing how the API can better distribute work over multiple CPU threads.
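To make the threading difference concrete, here is a minimal, hypothetical C++ sketch of the D3D12 pattern Star Swarm exploits: several threads record draw calls into their own command lists in parallel, and the main thread submits them all in a single call. All names and the omitted resource/pipeline setup are our own illustration, not Oxide's code:

```cpp
// Hypothetical D3D12 multi-threaded command recording sketch.
// Pipeline state, root signature, and resource binding are omitted;
// a real renderer must set these before drawing.
#include <d3d12.h>
#include <thread>
#include <vector>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void RecordFrame(ID3D12Device* device, ID3D12CommandQueue* queue,
                 unsigned threadCount, unsigned drawsPerThread)
{
    std::vector<ComPtr<ID3D12CommandAllocator>> allocators(threadCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(threadCount);
    std::vector<std::thread> workers;

    for (unsigned t = 0; t < threadCount; ++t) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[t]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[t].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[t]));
        // Each worker records its own command list; unlike D3D11's
        // immediate context, there is no single serializing bottleneck.
        workers.emplace_back([&lists, t, drawsPerThread] {
            for (unsigned i = 0; i < drawsPerThread; ++i)
                lists[t]->DrawInstanced(3, 1, 0, 0);
            lists[t]->Close();
        });
    }
    for (auto& w : workers) w.join();

    // The main thread submits every recorded list in one call.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

Under D3D11 the equivalent work funnels through a single immediate context (or through deferred contexts with limited gains), which is exactly the serialization Star Swarm is built to expose.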

It should be noted that while Star Swarm itself is a synthetic benchmark, the underlying Nitrous engine is relevant and is being used in multiple upcoming games. Stardock is using the Nitrous engine for their forthcoming Star Control game, and Oxide is using the engine for their own game, set to be announced at GDC 2015. So although Star Swarm is still a best case scenario, many of its lessons will be applicable to these future games.

As for the benchmark itself, we should also note that Star Swarm is a non-deterministic simulation. The benchmark pits two AI fleets against each other, so the outcome can differ from run to run. The good news is that the benchmark’s RTS mode keeps the run-to-run variation low enough to produce reasonably consistent results. Individual runs will still show some fluctuation, but the benchmark reliably demonstrates the larger performance trends.
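Our approach to a non-deterministic benchmark is the usual one: run it several times, average, and watch the spread. A hypothetical C++ sketch of that bookkeeping (the FPS values below are placeholders for illustration, not actual results):

```cpp
// Averages several benchmark runs and reports the run-to-run spread.
// The numbers below are placeholders, not real Star Swarm results.
#include <cmath>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> runs = {42.1, 40.8, 43.5, 41.6};  // per-run avg FPS

    double mean = std::accumulate(runs.begin(), runs.end(), 0.0) / runs.size();
    double var = 0.0;
    for (double r : runs) var += (r - mean) * (r - mean);
    double stddev = std::sqrt(var / runs.size());

    std::cout << "Mean FPS: " << mean
              << "  run-to-run stddev: " << stddev << "\n";
}
```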


Star Swarm RTS Mode

The Test

For today’s preview Microsoft, NVIDIA, and AMD have provided us with the necessary WDDM 2.0 drivers to enable DirectX 12 under Windows 10. The NVIDIA driver is 349.56 and the AMD driver is 15.200. At this time we do not know when these early WDDM 2.0 drivers will be released to the public, though we would be surprised not to see them released by the time of GDC in early March.

In terms of bugs and other known issues, Microsoft has informed us that there are some known memory and performance regressions in the current WDDM 2.0 path that have since been fixed in interim builds of Windows. In particular, the WDDM 2.0 path may see slightly lower performance than the WDDM 1.3 path for older drivers, and there is an issue with memory exhaustion. For this reason Microsoft has suggested that a 3GB card is required to use the Star Swarm DirectX 12 binary, although in our tests we have been able to run it on 2GB cards seemingly without issue. Meanwhile, DirectX 11 deferred context support is currently broken in the combination of Star Swarm and NVIDIA’s drivers, causing Star Swarm to immediately crash, so these results were gathered with D3D11 deferred contexts disabled.
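For reference, deferred contexts are D3D11's existing mechanism for multi-threaded command recording, i.e. the feature disabled above. Here is a minimal, hypothetical C++ sketch of the pattern (names our own, setup and error handling omitted; this is illustrative, not Oxide's actual code):

```cpp
// D3D11 deferred context sketch: a worker thread records commands into a
// deferred context, producing a command list that the immediate context
// replays later. Hypothetical example code, not Oxide's implementation.
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

void RecordAndReplay(ID3D11Device* device, ID3D11DeviceContext* immediate)
{
    // Create a deferred context for off-thread command recording.
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);

    // A worker thread would record its share of the frame's draws here
    // (state binding omitted for brevity).
    deferred->Draw(3, 0);

    // Bake the recorded commands into a command list...
    ComPtr<ID3D11CommandList> cmdList;
    deferred->FinishCommandList(FALSE, &cmdList);

    // ...which the main thread then replays on the immediate context.
    immediate->ExecuteCommandList(cmdList.Get(), TRUE);
}
```

In practice the driver still has to validate and submit everything on a single thread when ExecuteCommandList is called, which is why D3D11 deferred contexts deliver far smaller gains than D3D12's command lists.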

For today’s article we are looking at a small range of cards from both AMD and NVIDIA to showcase both performance and compatibility. For NVIDIA we are looking at the GTX 980 (Maxwell 2), GTX 750 Ti (Maxwell 1), and GTX 680 (Kepler). For AMD we are looking at the R9 290X (GCN 1.1), R9 285 (GCN 1.2), and R7 260X (GCN 1.1). As we mentioned earlier, support for Fermi and GCN 1.0 cards will be forthcoming in future drivers.

Meanwhile on the CPU front, to showcase the performance scaling of Direct3D we are running the bulk of our tests on our GPU testbed with three different settings to roughly emulate high-end Core i7 (6 cores), i5 (4 cores), and i3 (2 cores) processors. Unfortunately we cannot control for our 4960X’s L3 cache size; however, that should not be a significant factor in these benchmarks.

DirectX 12 Preview CPU Configurations (i7-4960X)

Configuration        Emulating
6C/12T @ 4.2GHz      Overclocked Core i7
4C/4T @ 3.8GHz       Core i5-4670K
2C/4T @ 3.8GHz       Core i3-4370

Though not included in this preview, AMD’s recent APUs should slot between the 2 and 4 core options thanks to the design of AMD’s CPU modules.
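As a hypothetical aside, this kind of core-count scaling can also be approximated in software by restricting a process's CPU affinity, as in the sketch below (the testbed configurations themselves would typically be set at the firmware level, and an affinity mask naturally does nothing about clock speeds):

```cpp
// Pins the current process to 4 logical processors (mask 0b1111) to
// roughly approximate a lower core count. Hypothetical illustration;
// it does not emulate clock speed or cache differences.
#include <windows.h>
#include <iostream>

int main() {
    DWORD_PTR mask = 0xF;  // logical processors 0-3
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        std::cerr << "Failed to set affinity: " << GetLastError() << "\n";
        return 1;
    }
    std::cout << "Process restricted to 4 logical processors.\n";
    // The benchmark workload would run from here...
}
```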

CPU: Intel Core i7-4960X @ 4.2GHz
Motherboard: ASRock Fatal1ty X79 Professional
Power Supply: Corsair AX1200i
Hard Disk: Samsung SSD 840 EVO (750GB)
Memory: G.Skill RipjawZ DDR3-1866 4 x 8GB (9-10-9-26)
Case: NZXT Phantom 630 Windowed Edition
Monitor: Asus PQ321
Video Cards: AMD Radeon R9 290X
AMD Radeon R9 285
AMD Radeon R7 260X
NVIDIA GeForce GTX 980
NVIDIA GeForce GTX 750 Ti
NVIDIA GeForce GTX 680
Video Drivers: NVIDIA Release 349.56 Beta
AMD Catalyst 15.200 Beta
OS: Windows 10 Technical Preview 2 (Build 9926)

Finally, while we’re going to take a systematic look at DirectX 12 from both a CPU standpoint and a GPU standpoint, we may as well answer the first question on everyone’s mind: does DirectX 12 work as advertised? The short answer: a resounding yes.

Star Swarm GPU Scaling - Extreme Quality (4 Cores)

Comments

  • B3an - Saturday, February 7, 2015 - link

    Thanks for posting this. This is the kind of thing I come to AT for.
  • Notmyusualid - Sunday, February 8, 2015 - link

    And me.
  • dragosmp - Saturday, February 7, 2015 - link

    Looking at the DX11 vs DX12 per-core load, it looks like per-core performance is the limiting factor under DX11, not the number of cores. As such, CPUs with low per-core performance, like AMD's AM1 & FM2 platforms, would benefit from DX12 more than Intel's CPUs that already have high IPC. (It may even be that the FX chips become decent gaming CPUs with their 8 integer cores, no longer limited by single-core turbo.)
  • guskline - Saturday, February 7, 2015 - link

    Thank you for a great article Ryan.
  • okp247 - Saturday, February 7, 2015 - link

    Cheers for the article, Ryan. Very interesting subject and a good read.

    There seem to be issues with the AMD cards though, especially under DX11. Other testers report FPS in the mid 20s to early 30s at 1080p extreme settings even with the old 7970 under Win7/DX11.
    The power consumption is also quite low: 241 watts for the 290X with the 6-core i7, when Crysis 3 pulled 375 watts, also with a 6-core i7, in your original review of the card. The card seems positively starved ;-)

    This could be the OS, the graphics API or the game. Possibly all three. Whatever it is, it looks like a big issue that's undermining your test.

    On a completely different note: maybe you could get the developer to spill the beans about their work with both APIs? (Mantle and DX12). I think that would also be a very interesting read.
  • OrphanageExplosion - Saturday, February 7, 2015 - link

    Yup, this is the big takeaway from this article - http://images.anandtech.com/graphs/graph8962/71451...

    AMD seems to have big issues with CPU load under DX11 - the gulf between NVIDIA and AMD is colossal. It's probably not an issue when all reviews use i7s to test GPUs, but think of the more budget-oriented gamer with his i3 or Athlon X4. This is the area where reviews will say that AMD dominates, but NOT if the CPU can't drive the GPU effectively.
  • ColdSnowden - Saturday, February 7, 2015 - link

    This reflects what I said above. AMD Radeons have a much slower batch submission time. Does that mean that using an NVIDIA card with a faster batch submission time can lessen CPU bottlenecking? Perhaps Guild Wars 2 would run better with an NVIDIA GPU, as my FX 4170 would be less likely to bottleneck it.
  • ObscureAngel - Saturday, February 7, 2015 - link

    Basically AMD now requires a much better CPU than NVIDIA to push the same draw calls.
    I recently benchmarked my Phenom II X4 945 (OC'd to 3.7GHz) with my HD 7850 against a GTX 770.

    Obviously the GTX 770 outperforms my HD 7850.
    But I benchmarked Star Swarm and games where my GPU usage sat well below 90%, which means I was bottlenecked by the CPU.

    Guess what:
    Star Swarm: AMD DX11: 7fps, NVIDIA DX11: 17fps, AMD Mantle: 24fps.

    I tested Saints Row IV, where my AMD card is CPU-bottlenecked all the time and framerates sit closer to 30 than 60, while with the GTX 770 they sit closer to 60 than 30.

    Even in NFS Rivals my GPU usage drops to 50% in some locations, which causes drops to 24fps.
    With NVIDIA, again, I get a rock-stable 30; unlocking the framerate I get 60 most of the time, and where the CPU drops me to 24fps on AMD, with NVIDIA I get 48fps.

    It's not a perfect comparison since the GTX 770 is far more powerful, but there's more evidence that weaker NVIDIA GPUs paired with low-end CPUs really do perform better than AMD cards, which seem to need more CPU power.

    I've tried to contact AMD but nobody ever replied. I even registered on GURU3D since there's a guy there who works at AMD, and he never replied either; many people there are just fanboys who attacked me instead of putting pressure on AMD to fix this.

    I'm seriously worried about this problem, because my CPU is old and weak, and the extra frames NVIDIA offers in DX11 are a big deal.
    Despite DX12 being close to release, I'm pretty sure many games will keep shipping on DX11, and the number of games with Mantle fits in one hand.
    So I'm thinking of selling my HD 7850 and buying the next 950 Ti just because of that; it's far cheaper than buying a new CPU and motherboard.
    I've known about this problem for more than 6 months and have tried to convince everybody and to contact AMD, but I'm always attacked by fanboys or ignored by AMD.
    So if AMD won't reply to me, maybe they don't want my money.

    Nothing is free, though: NVIDIA's DX11 optimizations eat more VRAM, and in some games like Dying Light and Ryse I notice more stuttering and sometimes longer texture load times.
    The same goes if you use Mantle; it eats more VRAM too.
    I expect DX12 will need more VRAM as well.

    If 2GB is getting tight already, expect it to get tighter if DX12 eats more VRAM like NVIDIA's DX11 path and Mantle do.

    Regards.
  • okp247 - Saturday, February 7, 2015 - link

    I think the NVIDIA cards are actually being gimped as well. On Win7/DX11 people are reporting 70-80 FPS at extreme settings, 1080p, with the top two 900-series cards on everything from old i5s to FX chips.
    They are just not being hurt as much as AMD's cards, maybe because of more mature drivers and/or a different architecture.
  • Ryan Smith - Saturday, February 7, 2015 - link

    Please note that we're using the RTS demo. If you're getting scores that high, you're probably using the Follow demo, which plays out entirely differently from run to run.
