Tyan's GT75-BP012

Getting to the meat of today's article, we have the Tyan GT75-BP012. Anton has already described the Tyan GT75 servers in great detail here, so we will recap and add a few details.

The Tyan GT75 machines (just like the Tyan TN71-BP012 servers launched a year ago) are based on one IBM POWER8 Turismo Single Chip Module (SCM) processor, offering either eight or ten cores. This CPU finds itself paired with Tyan's Habanero motherboard, the same as in IBM's most affordable OpenPOWER server, the S812LC.

The board has 32 DIMM slots fed by four IBM Centaur memory buffer chips (MBCs). Since the operational voltage of the Centaur chip PHY maxes out at 1.43 V, only low-voltage DDR3 (DDR3L) DIMMs are supported. The largest supported DIMMs are quad-ranked 32 GB DIMMs with 4 Gbit chips, allowing the server to hold up to 1 TB of RAM. Unfortunately, the latest 8 Gbit based DIMMs are not supported. In the standard configuration, Tyan ships the server with eight 16 GB DIMMs, for a total of 128 GB.
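As a back-of-the-envelope check of that 1 TB ceiling, the sketch below rebuilds the DIMM arithmetic. The x4 chip organization (16 data chips per 64-bit rank) is an assumption about how a 32 GB quad-rank RDIMM is typically built, not something stated here:

```python
# Sketch of the DIMM capacity arithmetic. Assumes x4-organized 4 Gbit DRAM
# chips (16 data chips per 64-bit rank, ECC chips ignored); this is the usual
# construction of a 32 GB quad-rank RDIMM, but is not confirmed above.
chip_gbit = 4
chips_per_rank = 64 // 4                    # 64-bit bus filled with x4 chips
rank_gb = chip_gbit * chips_per_rank / 8    # 8.0 GB per rank
dimm_gb = rank_gb * 4                       # quad rank -> 32.0 GB per DIMM
server_gb = dimm_gb * 32                    # 32 DIMM slots -> 1024.0 GB = 1 TB
print(dimm_gb, server_gb)                   # 32.0 1024.0
```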

Tyan GT75: IBM POWER8 Turismo CPU Options

                    POWER8 8-Core           POWER8 10-Core
Core Count          8                       10
Threads (SMT8)      64                      80
Nominal Freq.       2.328 GHz               2.095 GHz
Max Turbo Freq.     3.025 GHz               2.926 GHz
L2 Cache            512 KB per core         512 KB per core
L3 Cache            8 MB eDRAM per core     8 MB eDRAM per core
                    (64 MB per CPU)         (80 MB per CPU)
DRAM Interface      DDR3L-1600 (Low Power Only)
PCI Express         3 × PCIe controllers, 32 lanes
TDP                 130 W                   130 W

As the OpenPOWER POWER8 has to fit and operate within a 1U chassis, the clock speed is limited to 2.328 GHz nominal. However, that is just a paper spec, much like the nominal clock speed of a Xeon E5. In reality, the power governor defaults to "ondemand": the CPU runs at 2.06 GHz at low load and boosts up to 3.025 GHz when fully loaded. The speed steps are very small, only +/- 30 MHz, so the second-highest speed step is 2.99 GHz. Below you will find the configuration table of all Tyan GT75 servers.
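For illustration, the frequency ladder implied by those numbers can be sketched as follows. The uniform 30 MHz step between the quoted low-load (~2.06 GHz) and boost (3.025 GHz) frequencies is an assumption, as the full p-state table is not published here:

```python
# Sketch: enumerate the POWER8 p-states implied above, assuming uniform
# 30 MHz steps between the ~2.06 GHz low-load and 3.025 GHz boost clocks.
def pstates(f_max_mhz=3025, f_min_mhz=2061, step_mhz=30):
    """Return the frequency ladder (in MHz) from boost down to the minimum."""
    freqs = []
    f = f_max_mhz
    while f >= f_min_mhz:
        freqs.append(f)
        f -= step_mhz
    return freqs

ladder = pstates()
print(ladder[0], ladder[1])  # 3025 2995
```

The step right below boost lands at 2995 MHz, consistent with the ~2.99 GHz second-highest speed step mentioned above.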

Comparison of Tyan GT75 Servers

                    BSP012G75V4H-B4C        BSP012G75V4H-Q4T        BSP012G75V4H-Q4F
CPU                 2.328 GHz (8-core)      2.095 GHz (10-core)     2.095 GHz (10-core)
                    130 W/169 W TDP         130 W/169 W TDP         130 W/169 W TDP
Installed RAM       8 × 16 GB R-DDR3L       16 × 16 GB R-DDR3L      32 × 16 GB R-DDR3L
RAM (subsystem)     Up to 1 TB of DDR3L-1333 DRAM, 32 RDIMM modules, four IBM Centaur MBCs
Storage             2 × 512 GB SSDs         2 × 1 TB SSDs           4 × 1 TB SSDs
Storage Mezzanine   Tyan MP012-9235-4I (4-port SATA 6Gb/s IOC w/o RAID stack)
LAN                 4 × GbE ports           4 × 10 GbE ports        4 × 10 GbE ports
LAN Mezzanine       MP012-5719-4C           Qlogic+Broadcom 10GbE   MP012-Q840-4F Qlogic
                    Broadcom 1GbE           LAN Mezz Card           10GbE LAN Mezz Card
                    LAN Mezz Card

In today's article we're reviewing the basic model, the BSP012G75V4H-B4C. Notice the twelve (!) fans.

The Tyan GT75-BP012 makes use of Tyan's mezzanine cards for networking and for the storage controller. As a result, it can be equipped with up to four 3.5” hot-swappable SATA 6G HDDs/SSDs and four network controllers (1 GbE or 10 GbE) without using the 8-lane PCIe riser.

Now if you've been counting the PCIe lanes required for all of this, it seems like we should be a bit short, and indeed that's the case. Digging a bit deeper, we find that the server uses a PLX PEX8748 PCIe switch to take a PCIe 3.0 x8 root port from the CPU and share it among the LAN riser, the SATA riser, and the two black PCIe x8 slots.
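To see why the switch is needed, a quick lane count helps. The x8 widths assumed for the LAN and SATA risers below are illustrative, since the mezzanine link widths are not stated here:

```python
# Rough lane arithmetic for the PLX PEX8748 fan-out described above. The x8
# widths for the LAN and SATA risers are assumptions, not a verified board map.
upstream_lanes = 8  # single PCIe 3.0 x8 root port from the POWER8 CPU
downstream = {
    "LAN riser": 8,
    "SATA riser": 8,
    "black x8 slot 1": 8,
    "black x8 slot 2": 8,
}
oversubscription = sum(downstream.values()) / upstream_lanes
print(sum(downstream.values()), oversubscription)  # 32 4.0
```

Oversubscribing an x8 uplink 4:1 behind a 48-lane switch is a common cost trade-off, since in practice the mezzanine cards and slots rarely burst at full bandwidth simultaneously.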

The OpenPOWER Saga Continues: Can You Get POWER Inside 1U?
Comments

  • Einy0 - Friday, February 24, 2017 - link

    Articles like these make me wonder if some of these companies using IBM eServer iSeries (AS/400) as mid-level servers are wasting their money. I was always under the impression that Power was supposed to be tuned for database heavy workloads and hence have a massive advantage in doing so. I know the iSeries servers run an OS with DB2 built-in and tuned specifically for it, but how much of an advantage does that really equate to?
  • FunBunny2 - Friday, February 24, 2017 - link

    -- I know the iSeries servers run an OS with DB2 built-in and tuned specifically for it but how much of an advantage does that really equate to?

    unless IBM has done a complete port recently, AS/400 "integrated database" was built before server versions of DB2 existed. it's/was just a retronym.
  • kfishy - Friday, February 24, 2017 - link

    As ISAs become more and more relevant in the post-Moore's law world, where you can't solve a computational problem just by throwing ever more transistors at it, I wonder if this opens up an opportunity for POWER to carve out niches left out by Intel's more fixed and general purpose approach.

    At the same time, POWER will have to contend with a nascent but rising and truly open ISA in RISC-V, where companies can simply implement the subsets of the ISA that they need. The next decade in processor architecture is going to be interesting to watch.
  • FunBunny2 - Friday, February 24, 2017 - link

    -- As ISAs become more and more relevant in the post-Moore's law world, where you can't solve a computational problem just by throwing ever more transistors at it

    given that ISA has been reduced to z, ARM, and X86 not counting Power, of course. and ARM might not really qualify as equivalent. for those ancient enough, or well read enough, know that up to and during the "IBM and 7 Dwarves" era, ISA and even base architecture, made a varied ecosystem. not so much anymore. and I doubt anyone will invent a more efficient adder or multiplier or any other subunit of the real CPU. just look at the screen shots of chips over the last couple of decades: the real CPU area of a chip is nearly disappeared. in fact, much (if not most) of the transistor budget for some years has been used for caching, not ISA in hardware. so called micro-architecture is just a RISC CPU, and the rest of the chip is those caches and ever more complicated "decoder". that and integrating what had previously been other-chip functions. IOW, approaching monopoly control of compute.

    I expect the next decade to be more of the same: more cache and more off-chip function brought on chip. actual CPU ISA, not so much.
  • aryonoco - Saturday, February 25, 2017 - link

    Thank you Johan. Great article.

    Not all AnandTech articles live up to the standards set in the days past, but your articles continue your own excellent standards.

    Very much looking forward to POWER 9 chips. Hopefully they have also done the work to port the toolchain and important software already to it this time and we won't have to wait another 12 months after release to be able to compile normal Linux programs on it.

    Also, 12 fans running at 15,000 rpm in a 1U? What did that sound like?! Wow!
  • JohanAnandtech - Sunday, February 26, 2017 - link

    Thx Aryonoco. Not all of those 12 fans were running at top speed, but imagine the sound of a jumbo jet taking off. It clearly shows how hard it is to cool IBM's best in a 1U: you have to limit the clock speed to about 2/3 of what it is capable of and double the number of fans.
  • yuhong - Wednesday, March 1, 2017 - link

    "Unfortunately, the latest 8 Gbit based DIMMs are not supported."
    Micron doesn't make these chips anymore:
    Interestingly, Crucial is selling 32GB DDR3 quad rank RDIMMs again (but not LR-DIMMs):
  • mystic-pokemon - Sunday, March 5, 2017 - link

    For folks who are saying that POWER only looks good on paper. NOT true.

    I know a shit ton of stuff about one of the servers Johan listed above. He has a point when he says power consumption is only part of the picture.
    In short, when you combine all aspects into a TCO model, the POWER8 server delivers the best TCO value.
    We consider all the following into our TCO model
    a) Cost of ownership of the server
    b) Warranty (less than a conventional server, a different model of operations)
    c) What it delivers (how many independent threads (SMT8 on POWER8, remember? 192 hardware threads), how much memory bandwidth (230 GB/s), and how much total memory capacity in one server (1 TB with 32 GB DIMMs))
    d) For a public cloud use-case, how many VMs (with x HW threads and x memory cap/bw) can you deliver on one POWER8 server compared to other servers in the fleet today? Based on the above stats, a lot.
    e) Data center floor lease cost (24 of these servers in 1 rack, much denser; average the lease over the age of the server: 3 years). This includes all DC services like aggers, connectivity and such.
    f) Cost per kWh in the specific DC (1 rack has nominal power 750 W)

    All this combined, POWER has a good TCO. It's a massively parallel server, which is where its major advantage comes from. Choose your workload wisely. That's why companies continue to work on it.

    I am talking about all this without even factoring in CAPI over PCIe and OpenCAPI. With POWER9 all of this is getting even better. Get it? POWER is going nowhere.
