Edit: Read our full review here: http://www.anandtech.com/show/8781/

Regular readers of my Twitter feed might have noticed that over the past 12-24 months I have lamented the lack of 10 gigabit Ethernet ports on consumer motherboards. My particular gripe was the absence of 10GBase-T, the standard most easily integrated into the home. Despite my wishes, there are several main barriers to introducing this technology. First is cost: a 10GBase-T Ethernet card runs $400-$800 depending on your location (using the Intel X520-T2 as a reference). Next is power consumption: the controller can draw up to 14W, requiring either an active cooler or a passive heatsink with good airflow. Bandwidth matters as well (the X540-BT2 is a PCIe 2.1 x8 device, but can work in PCIe 3.0 x8 or x4 mode), and the market is limited to those who need faster internal network routing. Add all these factors together and it does not make for an easy addition to a motherboard. But step forward ASRock.
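To put those bandwidth numbers in perspective, here is a back-of-the-envelope sketch. The per-lane figures are the commonly quoted effective rates after encoding overhead (~500 MB/s per lane for PCIe 2.0 with 8b/10b, ~985 MB/s for PCIe 3.0 with 128b/130b); they are rough approximations, not measured values for this board.

```python
# Approximate per-lane, per-direction throughput after encoding overhead:
# PCIe 2.0: 5 GT/s with 8b/10b    -> ~500 MB/s per lane
# PCIe 3.0: 8 GT/s with 128b/130b -> ~985 MB/s per lane
PCIE2_LANE_MBS = 500
PCIE3_LANE_MBS = 985

# Two 10 Gbps ports at line rate, converted to MB/s
dual_10gbe_mbs = 2 * 10_000 / 8  # = 2500 MB/s

print(8 * PCIE2_LANE_MBS >= dual_10gbe_mbs)  # PCIe 2.0 x8: 4000 MB/s -> True
print(4 * PCIE3_LANE_MBS >= dual_10gbe_mbs)  # PCIe 3.0 x4: 3940 MB/s -> True
print(4 * PCIE2_LANE_MBS >= dual_10gbe_mbs)  # PCIe 2.0 x4: 2000 MB/s -> False
```

This is why the X540-BT2 can drop to a PCIe 3.0 x4 link without choking, but a PCIe 2.0 x4 link would leave the second port starved.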

The concept of the X99 WS-E/10G is simple: a workstation-class motherboard aimed at prosumers. This is where 10GBase-T makes the most sense, after all: users with sufficient funds to purchase a Netgear 10GBase-T switch at a minimum of $800, who measure their internal networking upgrades in hundreds of dollars per port rather than cents per port. The workstation motherboard is also designed to support server operating systems, and the rear panel is low profile for fitting into 1U chassis, similar to ASRock's other WS motherboards.

In order to deal with the heat from the Intel X540-BT2 chip, the extended XXL heatsink is connected to the top heatsink on board, and the final chipset heatsink uses an active fan. This is because the chipset heatsink arrangement also has to cool the two PLX 8747 chips that enable x16/x16/x16/x16 operation. If a prosumer has enough single-slot cards, this can extend to x16/x8/x8/x8/x8/x8/x8 if needed. Extra PCIe power is provided via two Molex connectors above and below the PCIe slots.

Aside from the dual 10GBase-T ports supplied by the X540-BT2 chip, ASRock includes two Intel I210-AT gigabit Ethernet ports for a total of four. All four can be teamed with a suitable switch in play. The key point to note, despite ASRock's video explaining the technology (which sounds perfectly simple to anyone in networking), is that this does not increase your internet speed, only the speed of the internal home/office network.

The rest of the motherboard is filled with ten SATA 6 Gbps ports plus another two from a controller, along with SATA Express and M.2 support. ASRock's video suggests the M.2 slot is PCIe 2.0 x4, although the product image lacks the Turbo M.2 x4 designation and the chipset would not have enough lanes, so it is probably M.2 x2 shared with the SATA Express. Audio is provided by an enhanced Realtek ALC1150 codec solution, and in the middle of the board a USB 2.0 Type-A port sticks out of the PCB, useful for dongles or out-of-case OS installation. There are also eight USB 3.0 ports on the board.

Like the X99 Extreme11, this motherboard is going to be very expensive. Dual PLX 8747 chips and an Intel X540-BT2 chip on their own would put it past most X99 motherboards on the market. To a first approximation, take the Extreme11 design, remove the LSI chip and add the X540-BT2, and it will still probably land $200-$300 above the Extreme11. Mark this one down at around $800-$900 as a rough guess, with an estimated release date in December.

Thinking out loud for a moment: 10GBase-T is being used here because it is a prosumer feature, and prosumers already want a lot of other features, hence the combination and the high overall price. The moment 10G is added to a basic H97/Z97 motherboard, for example (dropping the PCIe 3.0 x16 slot down to x8), a $100 board becomes a $400+ board, beyond the cost of any other Z97 motherboard. Ultimately, if 10GBase-T is to become a mainstream feature, the chip needs to come down in price.


Comments

  • alacard - Monday, November 24, 2014 - link

    I can't stand this argument. Hey Jeff, just because you have no use for, or can't see why anyone would need, faster technology doesn't mean no one else does. After all, 640K should be enough for anyone, am I right?
  • azazel1024 - Monday, November 24, 2014 - link

    Define no use? I use 2x1GbE with Windows 8.1 and SMB Multichannel to get 2Gbps of real bandwidth. I am changing around my storage, so I am heavily storage-bottlenecked by my single 3TB 7200rpm drive, but in a couple of months I'll have a 2x3TB RAID0 in my server and desktop instead of the mishmash of 1x3TB and 2x1TB I have going on right now (it used to be 2x2 and 2x1, but one of the 2TB disks started getting a little flaky).

    Once I am done with the upgrade, I should be able to easily push 300MB/sec over a network pipe sufficient for it. I don't NEED to do that, but I certainly wouldn't mind being able to utilize it. SSDs are dropping in price faster than HDDs, and seem to have been for a while. Price parity for storage is nowhere close yet, but it MAY come someday. Or it may be that most people have no issue with SSD storage, even for "bulk" things like video collections.

    My 1.7TiB of data took ~3hrs to transfer over because of disk-bound limitations. Once I build out the 2x3TB array, it should only take around 2hrs to transfer everything (and I'll try setting up a 3rd GbE link using the onboard NICs in the machines and running a temp 100ft network cable, just to see if the onboard NICs will play well with the Intel Gigabit NICs I have in the machines... because network porn).
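    The commenter's arithmetic holds up. A quick sketch, assuming a sustained single-disk rate of ~170 MB/s and the ~235 MB/s he reports for 2x1GbE SMB Multichannel (both figures taken from this thread, not measured here):

```python
def transfer_hours(tib, mb_per_s):
    """Hours to move `tib` TiB at a sustained rate of `mb_per_s` MB/sec."""
    total_mb = tib * 1024**4 / 1e6  # TiB -> bytes -> decimal MB
    return total_mb / mb_per_s / 3600

# 1.7 TiB at a single-disk ~170 MB/s:
print(round(transfer_hours(1.7, 170), 1))  # -> 3.1 hours
# the same data at ~235 MB/s over the dual-GbE link:
print(round(transfer_hours(1.7, 235), 1))  # -> 2.2 hours
```

    So the disk, not the 2Gbps pipe, is what stretches the copy to ~3 hours.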

    I absolutely do not need 10GbE. I can, however, desire it, and I can see really wanting it once my storage is even faster. I can grok a situation in a few years where my HDDs are getting long in the tooth and it makes sense to pay the reasonable premium and just get SSDs for my bulk storage. THEN 10GbE would make a lot of sense to take advantage of the speed provided.

    Plus, 10GbE can provide advantages like running remote storage as local storage, especially with iSCSI, as the significantly reduced latency of 10GbE is really needed to make that not painful.
  • Pork@III - Monday, November 24, 2014 - link

    You are a new William Gates, with a new 640K urban legend. You are right for this moment, but you are making a big mistake about the near future, and that is what we will all need when the future comes to us.
  • Glock24 - Monday, November 24, 2014 - link

    Is ASRock any good as a workstation motherboard? Where I live you can only find ASRock, MSI or Intel, and the ASRock products sold here are really crap.

    For my builds I always import all components and go with Gigabyte for the mobo.
  • Vatharian - Wednesday, November 26, 2014 - link

    Oh my gods, no. Gigabyte for server/workstation? Gods, no, please. I worked at a Gigabyte motherboard repair service for a couple of years, and we also had some drop-ins of other brands. After that I'm genuinely shocked that Gigabyte still exists, as is MSI. All of their products (except the MSI Big Bang Fusion and 9) are complete and utter crap, from layout (a chipset heatsink mounting screw blocking a PCI Express slot?) through electrical work (no galvanic separation between the sound card and USB) to PCB layers coming apart, especially on boards with more copper. MSI often has misplaced components (I resoldered thousands of MOSFETs and other small SMDs by just 1mm, and everything started working), and absolutely no onboard current spike protection. ASRock has always made ingenious but often low-quality motherboards (caps! MOSFETs!), though everything else was usually decent; they have skilled and brave engineers in-house. Recently, since they switched manufacturing, ASRock has become surprisingly solid. I find their workstation and Extreme6+ boards astonishingly good quality, on par with ASUS (which has dropped in quality recently).
  • mars2k - Monday, November 24, 2014 - link

    Need a fast SAN rig to run a virtualized environment. OK, cool... already shopping 10GbE for my home. Lots of decent new 10GbE NICs on eBay for cheaper than you think, but these Intel parts will do if the price is right. Already like ASRock; kudos for the M.2 support along with the PLX 8747 chips and tons of PCIe slots.
    LSI is great as well; you can find new rebranded LSI RAID cards all over the place, way cheap. Why would I pay for an onboard LSI solution that doesn't support RAID 5? Cut the board price by a third (more, please) and keep the LSI chips.
  • name99 - Monday, November 24, 2014 - link

    A partial (admittedly less than ideal) step in this direction is to provide multipath TCP in the OS. In the common situations of interest one is running multiple machines on the same OS around the house/office, so both ends will support mTCP. One can then aggregate a GigE connection, whatever your WiFi offers, and one, two or three USB (and/or Thunderbolt) Ethernet adapters.

    Yeah, this doesn't give us 10G; but it can easily and cheaply give us 2 or 3G, which is 2 or 3x faster than what we have today...

    [Note that this is aggregation at the TCP level. Aggregation at the ethernet level already exists, certainly in OSX, I assume in Linux, but it's finicky and requires special hardware. Working at the TCP level, mTCP should just aggregate automatically and cleanly over all the network ports you may have.]
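    As a present-day footnote to this thread: Linux eventually shipped multipath TCP in mainline (kernel 5.6 and later), exposed per-socket via `IPPROTO_MPTCP`. A minimal probe sketch, assuming a reasonably recent kernel (the constant's value, 262, is hard-coded as a fallback for Python builds that don't define it):

```python
import socket

# IPPROTO_MPTCP (262) exists on Linux >= 5.6; fall back to the raw number
# if this Python build doesn't define the constant.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def mptcp_available():
    """Return True if the kernel lets us open an MPTCP socket."""
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
        s.close()
        return True
    except OSError:
        # EPROTONOSUPPORT / EINVAL: kernel too old, non-Linux OS,
        # or MPTCP disabled via the net.mptcp.enabled sysctl
        return False

print(mptcp_available())
```

    An MPTCP socket behaves like an ordinary TCP socket to the application; path aggregation across interfaces happens in the kernel, which is exactly the "automatic and clean" property described above.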

    Which raises the question of: what's the status of mTCP?
    - supposedly we had a trial experimental version in iOS7 (which was only used for Siri)
    - I saw a few reports that it was part of the Yosemite betas, but it's not there as of 10.10.1
    - I believe it's part of Linux (but I've no idea what that means in terms of for which distros it is by default available and switched on)
    - Windows, as usual, seems late to the party; I've not even heard rumors about this being targeted as part of Win 10.

    Anyone have updates on any of these?
  • azazel1024 - Monday, November 24, 2014 - link

    I think you are late to the party. Windows has had multipath since Windows 8/Server 2012, in the form of SMB Multichannel. Read my comment further up. I've had it running for ~3yrs now, getting 235MB/sec between my server and my desktop using a pair of Intel Gigabit CT NICs.

    It is NOT part of Linux last I checked, which was admittedly ~8-10 months ago. No NAS that I am aware of supports multipath/multichannel for increasing network storage performance.
  • name99 - Monday, November 24, 2014 - link

    Real TCP multipath, not SMB multipath...

    SMB is a much easier problem than TCP, though less general. I agree that it's a good interim solution --- it solves 90% of people's problems with 10% of the work. Apple, however, have committed to TCP aggregation and the general solution, so, after looking at AFP (and I assume also SMB) multipath, they decided to focus all efforts on TCP multipath.

    I don't want to denigrate what MS has done --- multipath SMB is useful. But let's also not pretend that it is the same thing that I am talking about.

    As for Linux: http://www.multipath-tcp.org
    I don't care about NAS, because I don't use one, but you are right that that would be a significant real-world issue, and will probably be messily resolved slowly over the next five years.
    The case I care about is multiple macs to macs, ideally full TCP though I could live with just AFP or SMB; and this is what I hoped we'd get with Yosemite.
  • WatcherCK - Monday, November 24, 2014 - link

    Or the new Intel 40Gbit Fortville controller to really make that home network zing along!
