
  • duploxxx - Wednesday, March 11, 2015 - link

    The board tries to differentiate itself with 18 ports, but AnandTech does not test the performance of each type of port. Then why bother posting this review? Waste of time; apart from that, this is just another board among hundreds.
  • Gnarr - Wednesday, March 11, 2015 - link

    I have to agree with duploxxx. This board seriously needs a storage benchmark.
  • petar_b - Friday, January 29, 2016 - link

    No, the board doesn't need a storage benchmark; you just lack some experience with SAS.
  • dicobalt - Wednesday, March 11, 2015 - link

    This board is for people who play games and happen to have a buttload of porn. Don't act like it's for anything else.
  • niva - Tuesday, March 17, 2015 - link

    This is exactly why we are extremely interested in this board. Is there a problem?
  • petar_b - Friday, January 29, 2016 - link

    Get a TV and watch porn there; you can't afford this mobo anyway.
  • austinsguitar - Thursday, March 12, 2015 - link

    I will side with you, duploxxx... there is no reason to buy this board except to get those SATA ports... why in the HELL does this come without that kind of test... AnandTech... what are you doing...
  • Tchamber - Friday, March 13, 2015 - link

    Yeah, that's much too harsh. Anyone who has followed SSD/SATA coverage on this site for the last three or so years knows that SATA is already saturated. There's no longer any reason to test a board's storage performance.
  • abufrejoval - Thursday, March 12, 2015 - link

    I believe that’s a little harsh!

    With the information you have been provided on this site, you can use your own powers of deduction to come up with answers.

    To expect that Ian go through all the potential permutations and variants is a little much, especially when the technical limitations are clear and testing software RAIDs is beyond the scope of the article.

    With everything south of the DMI passing through the equivalent of 4 PCIe 2.0 lanes, or 16 Gbit/s of bandwidth, you can deduce that 10x 6 Gbit/s SATA ports won't deliver 60 Gbit/s to the CPU, especially with network, USB and all other peripheral traffic hanging off it as well.

    So if you hang SSDs on all these PCH ports, that's because you like them quiet or with fast access times, not because you expect their aggregate bandwidth to arrive at the CPU.
    Beyond the limits of the DMI I doubt you'll see any significant bottleneck inside the PCH, so you can do the math: any single 6 Gbit/s SATA drive capable of delivering 6 Gbit/s of data will very likely have that data actually arrive at that speed at the CPU. Any combination of SATA drives on the PCH will be bandwidth-constrained at 16 Gbit/s.

    The Avago/LSI 3008 at PCIe 3.0 x8 (63 Gbit/s) has a pretty good chance of delivering top 8-port SATA (48 Gbit/s) performance without creating much of a bottleneck, while 8x 12 Gbit/s SAS (96 Gbit/s) would potentially fail to deliver with that chip. On the other hand, LSI chips typically deliver top performance, very close to the theoretical maximum the connections allow, even with RAID5 and RAID6 on the chip.

    So there you go: the Avago/LSI SAS HBA has a very good chance of delivering the aggregate bandwidth you expect even if loaded with top-notch SSDs, while the 10-port PCH is most likely better used with spinning rust.
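
    For anyone who wants to redo the arithmetic, here is a minimal Python sketch of the same reasoning. The figures are the raw link rates quoted above; assume real usable bandwidth comes in lower once encoding and protocol overhead are paid:

        # Back-of-the-envelope check of the aggregate-bandwidth argument above.
        DMI2_GBIT = 16        # PCH uplink: ~4 lanes of PCIe 2.0
        PCIE3_X8_GBIT = 63    # LSI 3008 uplink: 8 lanes of PCIe 3.0
        SATA3_GBIT = 6
        SAS3_GBIT = 12

        def aggregate(n_drives, drive_gbit, uplink_gbit):
            """Aggregate drive bandwidth actually deliverable through an uplink."""
            return min(n_drives * drive_gbit, uplink_gbit)

        print(aggregate(10, SATA3_GBIT, DMI2_GBIT))      # 10 SATA SSDs on the PCH -> 16
        print(aggregate(8, SATA3_GBIT, PCIE3_X8_GBIT))   # 8 SATA SSDs on the 3008 -> 48
        print(aggregate(8, SAS3_GBIT, PCIE3_X8_GBIT))    # 8 SAS-3 drives on the 3008 -> 63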
  • wyewye - Friday, March 13, 2015 - link

    Abufrejoval, that's not a review, that's a butt-load of theoretical assumptions. Assumptions are the mother of fuckups. In practice you may discover different numbers, hence we read reviews online before buying.

    Stop apologizing for Ian's incompetence/laziness!
  • lordken - Friday, April 3, 2015 - link

    Rather, you should apologize for being lazy. abufrejoval did run some math for you, so it's pretty clear that all 18 ports won't deliver full bandwidth. If you need to run 18 SSDs at full speed then you probably need a server board or something.

    If you want to troll go elsewhere.
  • petar_b - Friday, January 29, 2016 - link

    abufrejoval is not being theoretical - 1 SSD on the PCH can do 400 MB/s, but 4 SSDs running simultaneously can give less than 100 MB/s each. Now move them to the SAS controller and each SSD gives 400 MB/s.

    Once you start using SSDs on SAS - you will never go back to PCH.

    I posted an article on the web a year ago showing the differences I measured with the CrystalDiskMark benchmark - the values are shocking... measurements were based on the ASRock X79 Extreme11: same SAS controller, just a slightly slower CPU and RAM.
  • wyewye - Friday, March 13, 2015 - link

    Good point duploxxx.
    I haven't seen a professionally done review on this site for quite a while.
  • petar_b - Friday, January 29, 2016 - link

    This motherboard has nothing to do with gaming; go for ROG if you want gaming. It's for business use, rendering, 3D - anywhere storage needs to be fast and has to be SAS.

    We are using the older generation of this board, the X79, with the PLX and SAS controller. There are no words nor space here to explain the performance increase we saw when we hooked up 8 SSDs (960 GB) to the SAS controller instead of the Intel one...

    It's perfect for virtualization or cloud use - example: 128 GB RAM + 6 TB of SSDs can accommodate more than 20 VMware images, each with 4 GB RAM, running perfectly on a Xeon.

    @dicobalt - keeping porn? It's so sad people think no further than gaming and watching TV. Go buy a book and learn something... MATLAB, 3D Studio... and earn money. Actually, get a TV and watch porn there.
  • 3DoubleD - Wednesday, March 11, 2015 - link

    Thanks for the review! This board is incredible. I run a storage server with software RAID (Unraid), and this board alone would handle all of my SATA port needs without any PCIe SATA cards. The only issue is the price. For $600 I could easily buy a $150 Z97 motherboard with 8 SATA ports and two PCIe x8 slots, buy two $150 PCIe 2.0 x8 cards (each with 8 SATA III ports), and I'd still have money left over (probably put it towards a better case!). And that's not counting the significant difference in CPU and DDR4 costs.

    Clearly this motherboard is meant for a use case beyond a simple storage server (so many PCIe 8x slots!), so I can't say they missed their intended mark. However, I really wish they could attempt something like this on the Z97 platform, more than 10 SATA ports but with no more than two (or three) PCIe 8x slots (even if some of them are 4x). Aim for a price below $250.

    I can't pretend it would be a big seller, but I know I'd buy one!
  • WithoutWeakness - Thursday, March 12, 2015 - link

    ASRock has the Z87 Extreme11 with 22 SATA III ports (6 from chipset, 16 from LSI controller) along with 4-way SLI support (x8,x8,x8,x8) and a pair of Thunderbolt 2 ports. I'm not sure how feasible it is to plan on using all of those with only 16 PCIe 3.0 lanes from the socket 1150 CPU but it sounds like everything you're asking for. Unfortunately it came in over $500, double your asking price.

    I think you'll be hard pressed to get what you're looking for at that $250 mark, especially on a Z97 board. Socket 1150 CPUs only have 16 lanes, and every manufacturer willing to put an 8+ port RAID controller on board will also want a PLX PCIe bridge chip to avoid choking other PCIe devices (GPUs, M.2 drives, etc). The RAID chip alone would bring a $100 motherboard into the $200+ range, and adding the PLX chip would likely bring it to $250+. At that point every manufacturer is going to look at a board with 14+ SATA ports, a PLX chip, and a Z97 chipset and say "let's sell it to gamers", slap on some monster VRM setup, additional USB 3.0 ports, and four PCIe x16 slots, bake in some margin, and sell it for $400+.
  • 3DoubleD - Friday, March 13, 2015 - link

    Makes sense. Thanks for the suggestion, I'll look into it. Not sure why I've never come across this board; it doesn't seem to be sold at any of the common outlets I shop at (Newegg.ca, etc.). Still, going the add-in SATA card route seems to be the more economical way.
  • wintermute000 - Sunday, March 15, 2015 - link

    You wouldn't have ECC with Z97.
    Maybe Unraid is better than ZFS/BTRFS, but I still wouldn't roll with that much storage on a software solution (vs HW RAID) without ECC.
  • Vorl - Wednesday, March 11, 2015 - link

    This is such a strange board. With 18 SATA connections, the first thing everyone will think is "storage server". If all 18 ports were handled by the same high-end RAID controller then the $600 price tag would make sense. As it is, this system is just a confused jumble of parts slapped together.

    Who needs 4 PCIe x16 slots on a storage server? That is an expense for no reason.
    Who needs 18 SATA connections spread across different controllers that can't all be hardware-RAIDed together? Sure, you can run software RAID, but for $600 you can buy a nice RAID card and SAS-to-SATA breakout cables, and still be ahead due to full hardware performance with cache.

    Also, for a server, why would they not have the IGP port? I may be missing something, but I thought the CPU has integrated graphics.

    Just not an awesome setup from what I can tell.
    So... why bother having all those SATA ports if they aren't all tied to RAID?

    They add an LSI controller, and that isn't even what handles RAID on the system.
  • 1nf1d3l - Wednesday, March 11, 2015 - link

    LGA2011 processors do not have integrated graphics.
  • Vorl - Wednesday, March 11, 2015 - link

    Ahh, like I said, I might have missed something. Thanks!

    I was just looking at the Haswell family and know it does support an IGP. I didn't know that 2011/-E doesn't.
  • yuhong - Saturday, March 14, 2015 - link

    Yeah, servers are where 2D graphics on a separate chip on the motherboard are still common.
  • Kevin G - Wednesday, March 11, 2015 - link

    Native PCIe SSDs or 10G Ethernet controllers would make good use of the PCIe slots.

    A PCIe slot will be necessary for graphics, at least during first-time setup. Socket 2011-3 chips don't have integrated graphics, so it is necessary. (It is possible to set everything up headless, but you'll be glad you have a GPU if anything goes wrong.)

    As for why use the LSI controller: it is a decent HBA for software RAID like those used under ZFS. For FreeNAS/NAS4Free users, the large number of ports enables some rather large arrays, or features like hot sparing and SSD caching.
  • Vorl - Wednesday, March 11, 2015 - link

    For 10G Ethernet controllers/Fibre HBAs you only need x8 slots ("need" is such a strong word, too, considering 10G Ethernet and 8Gb Fibre only need 3 and 2 lanes respectively at PCIe 2.0). For super-fast PCIe storage like SSDs you only need x4 slots, which is still 2 GB/s at PCIe 2.0. They would have been better served adding more PCIe x8 slots, but then again, what would be the point of 18 SATA ports if you were going to add storage controllers in the PCIe x16 slots?
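
    A rough Python sketch of where those lane counts come from, assuming roughly 4 Gbit/s of usable payload per PCIe 2.0 lane (5 GT/s raw, minus 8b/10b encoding):

        import math

        PCIE2_LANE_GBIT = 4.0  # approx. usable payload per PCIe 2.0 lane

        def lanes_needed(device_gbit):
            # Smallest whole number of lanes that covers the device's bandwidth.
            return math.ceil(device_gbit / PCIE2_LANE_GBIT)

        print(lanes_needed(10))         # 10G Ethernet -> 3 lanes
        print(lanes_needed(8))          # 8Gb Fibre Channel -> 2 lanes
        print(4 * PCIE2_LANE_GBIT / 8)  # an x4 slot -> 2.0 GB/s for a PCIe SSD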

    The four PCIe x16 slots make me think compute server, but that doesn't mesh with 18 SATA ports. If database engines were able to use graphics cards now (which I know is being worked on), this system might make more sense.

    It still makes me think they just slapped a bunch of stuff together without any real thought about what the system would really be used for. I am all for going fishing and seeing what people would use a board like this for, except that the $600 price tag puts it out of reach of anyone but the most specialized use cases.

    As for the LSI controller, like someone mentioned above, you can get a cheaper board with 8-port SATA PCIe cards to give you the same number of ports. More ports, even, since most boards these days come with 6x SATA 6 Gb/s connections. The 1 MB of cache on the LSI chip is so small it's laughable.

    The 128 MB of cache for the RAID controller is a little better, but again, with just 6 RAID ports, what's the point?

    The whole board is just a mess of confusion.
  • 3DoubleD - Wednesday, March 11, 2015 - link

    Similar to my thinking in my post above.

    If you are going for a software RAID setup with a ludicrous number of SATA ports, you can get a Z97 board with 3 full PCIe slots (x8/x8/x4) and 8 SATA ports. With three Supermicro cards (two 8-port SATA III and one 8-port SATA II because of the x4 PCIe slot) you would have 32 SATA ports and it would cost you $650. The software RAID I use "only" accepts up to 25 drives, so that last card is only necessary if you need that 1 extra drive; for $500 you could run a 24-drive array with an M.2 or SATA Express SSD for a cache/system drive. And as you pointed out, since it is Z97, it would have onboard video.

    Basically, given the price of these non-RAID add-in SATA cards, I'd say that any manufacturer making a marketing play on SATA ports needs to keep the cost of each additional SATA port to <$20/port over the price of a board with similar PCIe slot configurations.
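
    As a hypothetical worked example of that criterion (the ~$240 X99 baseline board price is my assumption; the rest are the rough 2015 figures from this thread):

        # Cost of each port beyond what a comparable baseline board provides.
        def cost_per_extra_port(total_cost, baseline_cost, extra_ports):
            return (total_cost - baseline_cost) / extra_ports

        # X99 Extreme11 (~$600, 18 ports) vs. an assumed ~$240 X99 board (10 ports).
        print(cost_per_extra_port(600, 240, 8))   # -> $45.00 per extra port
        # Z97 route: $150 board (8 ports) + two $150 8-port HBAs = $450, 24 ports.
        print(cost_per_extra_port(450, 150, 16))  # -> $18.75 per extra port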

    As you said, if this board had 18 SATA ports that could support hardware RAID, then it would be worth the additional price tag. This is probably not possible though, since 10 SATA ports are from the chipset and the rest from an additional controller. For massive hardware RAID setups you're better off getting a PCIe 2.0 x16 card (for 16 SATA III drives) or a PCIe 3.0 x16 card (if such a thing even exists, it could theoretically handle 32 SATA III drives). I'm sure such large hardware RAID arrays become overwhelming for the controller and would cost a fortune.

    Anyway, this must be some niche prosumer application that requires ludicrous amounts of non-RAID storage and 4 co-processor slots. I can't imagine what it is though.
  • Runiteshark - Wednesday, March 11, 2015 - link

    No clue why they didn't use an LSI 3108 and include the port for the add-on BBU and cache unit like Supermicro does on some of their boards. Also not sure why these companies can't put 10G copper connectors, at minimum, on these boards. Again, Supermicro does it without issue.
  • DanNeely - Wednesday, March 11, 2015 - link

    There are people who think combining their gaming godbox and Blu-ray-rip mega storage box into a single computer is a good idea. They're the potential market for a monstrosity like this.

    You know what they say, "A fool and his money will probably make someone else rich."
  • Murloc - Wednesday, March 11, 2015 - link

    I guess this is aimed at the rather unlikely situation of someone wanting both storage and computation/gaming in the same place.

    You know, there are people out there who just want the best and don't care about wasting money on features they don't need.
  • Zak - Thursday, March 12, 2015 - link

    I agree. For reasons Vorl mentioned this is a pointless board. I can't imagine a target market for this. My first reaction was also, wow, beastly storage server. But then yeah, different controllers. What is the point?
  • eanazag - Thursday, March 12, 2015 - link

    It is not a server board; it's a Haswell-E desktop board. I have no use for that many SATA ports, but someone might.

    2 x DVD or BD drives
    2 x SSDs in RAID 1 for boot

    Use Windows to mirror the two RAID 0 volumes below.
    7 x SSDs in RAID 0
    7 x SSDs in RAID 0

    The mirrored RAID 0 volumes could get you about 3-6 GB/s transfer rates on reads, from SSDs doing 400 MB/s in sequential read. Maybe a little less in write speed. All done with mediocre SSDs.

    This machine would cost over $2000.
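
    A quick sketch of where that read figure comes from, assuming ideal linear scaling (real arrays fall short of this, and chipset-attached drives would also hit the DMI ceiling discussed above):

        DRIVE_MBPS = 400     # sequential read of one mediocre SATA SSD
        DRIVES_PER_LEG = 7   # each RAID 0 leg of the mirror

        leg_read = DRIVES_PER_LEG * DRIVE_MBPS  # one RAID 0 volume: 2800 MB/s
        mirror_read = 2 * leg_read              # reads split across both legs: 5600 MB/s
        print(leg_read / 1000, "to", mirror_read / 1000, "GB/s")  # ~2.8 to 5.6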
  • wyewye - Friday, March 13, 2015 - link

    "this system is just a confused jumble of parts slapped together"
    This is the best conclusion for this mobo.

    I think they hope the marketing/sales guys will be able to bamboozle dummies into buying this as an 18-port RAID server mobo. Anyone who spends $600 on a high-end mobo without reading a review deserves what's coming to them.
  • swaaye - Wednesday, March 11, 2015 - link

    That chipset fan is cheesy. They most definitely could have come up with a better cooling solution.
  • Hairs_ - Wednesday, March 11, 2015 - link

    As pointed out above, this board doesn't answer a single question any user is asking, and it doesn't fulfill any logical use case.

    I'm struggling to see why it was reviewed, other than the possible reason that the reviewer only wants to review weird expensive stuff. Getting in a board whose only supposed reason to exist is the number of storage ports, then not testing the storage, and saying "I'm sure it's the same as the one I reviewed a few years ago" is... troubling. What was the point of getting it in for review at all??
  • ClockHound - Wednesday, March 11, 2015 - link

    What's the point?

    It's the new Anandtech, where the point is clicks! Catchy headlines with dubious content, it's how Purch is improving a once great review site. Thanks, Purch!
  • ap90033 - Friday, March 13, 2015 - link

    I think you may be on to something! Sad to see..
  • Stylex - Wednesday, March 11, 2015 - link

    I don't understand how motherboards still have USB 2.0 ports. Did the transition from USB 1.1 to USB 2.0 seriously take this long?
  • DanNeely - Wednesday, March 11, 2015 - link

    At the time of the USB 1.1-2.0 transition, Intel chipsets had at most 6 ports, and since the new standard didn't need any more IO pins they could cut over all at once. Not needing any more IO pins is important because pin count has been the limiting factor for chipset cost for a number of years, with the die size being determined by the number of output pins added.

    The bottleneck for USB3 has been the chipsets. Pre-IVB they had no USB3. IVB added support for 4 ports, Haswell bumped it to 6, and the 9x series chipsets that were supposed to launch with Broadwell were essentially unchanged from the previous model. As a result, mobo makers who wanted to add more USB3 have had to spend extra money on 3rd party chips to do so. Initially this meant USB3 controllers, which generally ate a PCIe lane for every pair of ports added. More recent designs use 4-port USB hub chips, which give better bang for the buck but still drive prices up.

    When Skylake launches later this year, the situation should improve; its higher-end versions will offer up to 10 ports. That might be enough to make all-USB3 configurations possible in the mid range without either using a very small total number of ports or driving the board price up with extra controller chips. High-end boards will probably still have some ports attached to a controller though, because Intel is expanding its use of flexible IO ports and native USB3 will be competing with PCIe storage for IO pins. In both cases though, I suspect a number of boards will also expose the 4 remaining 2.0 ports, probably as 1 internal header and 2 external ports. That won't be just a case of 'gotta use them all': older OSes (e.g. Win7) without native support for USB3 are easier to install if you've got a few 2.0 ports available, and there will be residual demand for 2.0 headers from people with older cases or internal card readers.

    The situation is similar with AMD chipsets, but due to their only being able to compete in the value segment of the market, they're behind Intel, topping out at 4 native USB3 ports.
  • Stylex - Friday, March 13, 2015 - link

    Ah, the pin count makes a lot of sense, thanks for that insight!

    But still, how much more could it possibly drive up the price to use a third-party controller, $5-10? I'd pay that for all-USB3, especially on a board like this one.
  • DanNeely - Sunday, March 15, 2015 - link

    Estimates I've seen over the last few years put a 2-port USB3 PCIe controller at adding $10 to the retail price of a board, and a 4-port USB3 hub chip at $5. The caveat is that hubs only add ports, not total bandwidth, which is fine if you're only interested in being able to plug a USB3 device into any port but use sufficiently few of them that sticking multiple high-speed devices on the same hub isn't a problem. Controllers don't have that problem, but do need PCIe lanes. Those tend to be in short supply on Intel's 8x/9x chipsets. Using a 4-8 lane PLX on the chipset relieves that pressure somewhat but is another $10 or $20 on the board price. The situation will be better for Skylake due to the 100-series chipsets having 28 high-speed IO lanes instead of only 18, but that's partially counterbalanced by M.2/SATA Express connections needing several lanes each.

    The lack of native 3.1 support means that the next generation of mobos will probably go the controller route, not hubs, to bump up the port count. With Intel rarely doing any major updates on the tock versions of the chipset, it will probably be at least 2017 before external USB3 controllers mostly go away.
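
    To make the hub-vs-controller trade-off concrete, here's a small sketch using the estimated price adders above (the per-chip figures are the rough estimates quoted, not vendor pricing):

        # Two ways to add USB3 ports: discrete controllers vs. hub chips.
        def via_controllers(ports):
            n = -(-ports // 2)  # ceil: each controller adds 2 ports, eats 1 PCIe lane
            return {"cost_usd": 10 * n, "pcie_lanes": n, "bandwidth_shared": False}

        def via_hubs(ports):
            n = -(-ports // 4)  # ceil: each hub fans one existing port out to 4
            return {"cost_usd": 5 * n, "pcie_lanes": 0, "bandwidth_shared": True}

        print(via_controllers(8))  # {'cost_usd': 40, 'pcie_lanes': 4, 'bandwidth_shared': False}
        print(via_hubs(8))         # {'cost_usd': 10, 'pcie_lanes': 0, 'bandwidth_shared': True}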
  • darkfalz - Thursday, March 12, 2015 - link

    You have a keyboard, mouse or gamepad that requires 100 MB/sec bandwidth, do you?
  • Stylex - Friday, March 13, 2015 - link

    With that logic we should still have USB 1.1 or serial ports. All-USB3 just makes things easier to plug in, as you don't have to go looking for the 'special' ports.
  • wmaciv01 - Wednesday, March 11, 2015 - link

    I just built a system with this board to host my ANS-9010s (4x 32GB in an 8-port RAID 0). Still kind of tinkering with it and exploring the BIOS. I installed the 40-lane, 6-core Haswell-E and have 32GB of Mushkin RED DDR4-2400 and a Samsung XP941 256GB as the boot drive. The case is an Xigmatek Elysium. Wish I could post some pics/bench stats for you guys.
  • darkfalz - Thursday, March 12, 2015 - link

    8 port RAID 0 - I hope nothing critical resides on that drive.
  • dishayu - Thursday, March 12, 2015 - link

    I'm not so sure if I would buy a motherboard without USB-C ports today.
  • darkfalz - Thursday, March 12, 2015 - link

    18 SATA ports but no onboard RAID 5 or 6 - almost a LOL moment, but I suppose you could do your boot SSD and then run a huge soft-RAID array...
  • Navvie - Thursday, March 12, 2015 - link

    I'd be interested to see ZFS benchmarks, assuming of course the LSI controller still allows the drives to be accessed as JBOD.
  • mpogr - Thursday, March 12, 2015 - link

    It doesn't look like the guys here have heard about ZFS, otherwise they wouldn't complain about the lack of hardware RAID...
  • mpogr - Thursday, March 12, 2015 - link

    This board could be interesting for either a bare-metal or virtualised ZFS-based storage server. There is no need for hardware RAID on one of those, just fast SATA ports, a fast CPU and lots of RAM. Having PCIe 3.0 slots is beneficial for InfiniBand cards and, without a switch (which for 40Gbit+ IB costs thousands), you'd need a few of them, so multiple x8 or x16 slots are beneficial. ECC RAM support (with Xeon CPUs) is a must for such a server as well.
    What's missing? First and foremost, onboard graphics and IPMI! You want to be able to run this sucker headless. Second, what the heck is with the price? Comparable Supermicro boards (e.g. X10SRH-CF, with IPMI!) cost $400. Yes, they don't support multi-GPU graphics or overclocking, but who needs those on a storage server? I think this board completely missed its target audience...
  • JohnUSA - Friday, March 13, 2015 - link

    $630 ? Ouch, no thanks.
  • mapesdhs - Monday, March 16, 2015 - link

    Without any cache, the SAS controller is useless. Lack of cache really kills 4K performance, especially with SSDs (I've tested this with a P410 vs. other cards). With cache included, even just 1GB, 4K performance can be amazing, over 2GB/sec.

    Hence, as others have said, you're better off using a different, cheaper board and a separate SAS card that does have cache and a BBU, including any of numerous X79 boards. Though if storage is a focus, then something with 10GigE support, a Xeon and ECC (unless one is using ZFS, I guess) makes more sense, in which case one is moving away from consumer X79/X99 anyway.

    mpogr makes some interesting points; the thing is, there are proper Xeon server boards available for less anyway. Put a SAS card on one of those and away you go, no need to worry about any consumer-related mobo issues. After all, if one is going to be using a Xeon and ECC then oc'ing doesn't matter at all.

    I was considering an X79 Extreme11 a couple of years ago for a pro system I was building for someone (they couldn't afford a dual-Xeon setup), but the lack of SAS cache meant it was not worthwhile. Used an ASUS P9X79 WS instead, and I'm glad I did.

    Ian.
  • jasica - Thursday, March 19, 2015 - link

    Speaking professionally, I agree with duploxxx: there is no reason to buy this board, because not everyone is into gaming!
  • Native7i - Thursday, April 23, 2015 - link

    I expected more USB ports on the rear panel.
  • Saelnaydar - Friday, May 15, 2015 - link

    Hello,

    Not all storage ports are usable the way you want or the way you think; the RAID ports are bound to only one subset of the connectors.

    If you are using SATA M.2 cards, that connectivity shares bandwidth with some ports, and you have to figure out which unshared ports support RAID.
    SSDs should be bound to certain special ports and not shared with M.2 or RAID... The wiring setup is a nightmare... the storage part is not as easy as it sounds.

    More importantly, for a 700-euro board:
    3-way SLI does not work out of the box - a big drawback for a 4x x16 PCIe 3.0 motherboard with 2 built-in bridge chips supporting up to 4-way SLI.
    I finally got 3-way SLI working.
    At first, 3-way was buggy and achieved lower performance than 2-way SLI.
    After a lot of card switching and testing I finally, "as a last option", updated the BIOS to 1.2 (which was not there in January when I bought the MB).

    The BIOS flash to 1.2 on my ASRock X99 Extreme11 made the 3-way SLI configuration work.
    There was no indication on forums or in the BIOS update release notes that it fixed SLI, but it did for me.
  • afbfxt - Wednesday, November 11, 2015 - link

    I promise this will be good, and I guarantee you can't make this stuff up.

    The ASRock X99 Extreme11 advertises itself as a mobo that has 8 SAS-3 ports. However, the SAS-3 ports have the same form factor as a seven-pin SATA connector. The mobo manual states "For connecting SAS HDDs, please contact SAS data cable dealers" because ASRock does not include the necessary SAS-3 cables. So I contacted all the SAS-3 cable manufacturers in the USA and they all said they have never heard of a SAS-3 cable with an SFF-8482 connector on one end and a seven-pin SATA connector on the other end that supports 12 Gb/s. So I e-mailed ASRock support and asked them if they knew where I could get a SAS-3 cable like this, and they never responded. Then I did a Google search to see if anyone was having the same problem and found one other person who was. The whole reason ASRock is charging over $600 for this board is that it offers an LSI SAS 3008 controller on board, but obviously it's completely useless, so they're just ripping you off.

    At first I was extremely angry, but after a few days (and my RMA approval) I found this whole incident to be hilarious.
    I mean, can you imagine a company doing something like this. LOL!!!!!!!!!

    I would never buy anything from ASRock ever again, and I don't recommend that anybody else buy from them either. I will stick with, and recommend to others, more reputable brands like ASUS, Gigabyte, MSI, etc.
  • petar_b - Friday, January 29, 2016 - link

    Did you try using a regular SATA data cable and a power connector from your power supply? You won't get far if you insist on using SFF-8482...
  • petar_b - Friday, January 29, 2016 - link

    No, it's not a rip-off. Each PLX chip costs about $60 (you can find this on the web), and the SAS controller that one would buy as a PCIe card is approx. $300, meaning you pay about $420 here (two PLX chips plus the controller) just for the good SAS implementation. You need the PLX chips, because you can't run a dual graphics card setup without them - don't forget that SAS takes 4 lanes too.

    Yes, the board could be cheaper, but it's a product for a narrow audience... they have to compensate. ASUS WS and Gigabyte boards also use PLX chips, and you can see how prices increase rapidly when they provide PLXes...

    There is no way around a PLX if you want SAS and a multi-graphics-card setup.
  • d_sing - Tuesday, March 8, 2016 - link

    Does anyone know if this board will support 8TB HDDs on all 18 ports at once? (i.e. 18 x 8TB = 144TB) Considering this board for a server build...
