ASRock Rack B550D4-4L Motherboard Review: B550 Goes Professional with BMC
by Gavin Bonshor on May 20, 2021 9:00 AM EST

Visual Inspection
Perhaps the most notable design trait of the ASRock Rack B550D4-4L is its transposed AM4 socket and memory slots. This layout optimizes airflow when the board is installed into a 1U chassis, where fans push air front to rear (or rear to front) through the system. Despite the transposed sockets, the board can easily be installed into a regular ATX chassis, although it would be preferable to direct the air upwards and exhaust it out of the top, following the natural convection of warm air. In the top-right corner of the board is the 24-pin ATX motherboard power input. The board's aesthetic is primarily a standard green PCB, with small silver heatsinks cooling the CPU section of the power delivery, the SoC section, and the chipset itself.
The ASRock Rack B550D4-4L has plenty of connectivity and headers available for users of all levels. Starting from the bottom left-hand corner, ASRock Rack includes a removable SPI flash chip alongside a COM port header, a BMC SMBus header, an external speaker header, and an Intelligent Platform Management Bus (IPMB) header. For users focused on security, ASRock Rack also includes a Trusted Platform Module (TPM) header, along with one USB 3.2 G1 Type-A header (two ports) and one USB 2.0 header (two ports). For cooling, there is a total of six 6-pin fan headers, with notches in the connectors that also allow the use of 4-pin and 3-pin cooling fans.
Focusing on the board's PCIe slot area, our sample from ASRock Rack arrived with both slots taped up. This isn't the case on retail models, and it indicates that we were likely shipped a pre-production sample. Underneath the tape are a full-length PCIe 4.0 x16 slot and a half-length PCIe 4.0 x4 slot, both of which feature metal slot reinforcement.
Looking at storage capability, the B550D4-4L has just one M.2 slot, which operates at PCIe 3.0 x4 and also supports SATA drives. For SATA devices there are six ports in total: four are driven by the chipset and include support for RAID 0, 1, and 10 arrays, while the other two are powered by an ASMedia ASM1061 SATA controller.
Along the top of the transposed socket are four transposed memory slots. These can accommodate up to 128 GB of system memory, with officially supported speeds of up to DDR4-3200. The board supports both non-ECC and ECC memory, but that support is reliant on the processor: users with regular Ryzen desktop processors can only use non-ECC DDR4, while Ryzen PRO models with Radeon Graphics and PRO technologies can use ECC memory. Using memory outside of the validated specification, such as ECC on regular Ryzen, means your mileage may vary.
Providing remote access, as well as integrated graphics via a D-Sub output on the rear panel, is an ASPEED AST2500 BMC controller. Users looking to access the system remotely can do so via a dedicated Realtek RTL8211E Gigabit Ethernet port on the rear panel. The AST2500 itself is located on the left-hand side of the board next to the PCIe slots.
The power delivery on the B550D4-4L uses premium components but isn't adequately cooled for performance users. It is a 4+2 phase design driven by an Intersil ISL69247 PWM controller, which can handle up to eight channels. The CPU section, located on the opposite side of the board from the SoC area, uses four Renesas ISL99390 90 A power stages, for a combined rating of up to 360 A to the processor, while the SoC section uses two of the same ISL99390 90 A stages.
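As a quick sanity check on those figures, here is a back-of-the-envelope sketch. Note the assumption: it simply sums the rated limits of identical power stages and ignores thermal and transient derating, so real sustained limits will be lower.

```python
# Back-of-the-envelope VRM capacity check for the B550D4-4L's 4+2 design.
# Assumption: per-stage rated current simply sums; real-world limits also
# depend on cooling, switching losses, and controller configuration.

def vrm_capacity(stages: int, stage_rating_a: int) -> int:
    """Total rated output current (A) for a bank of identical power stages."""
    return stages * stage_rating_a

cpu_rail_a = vrm_capacity(4, 90)  # four ISL99390 90 A stages on the CPU rail
soc_rail_a = vrm_capacity(2, 90)  # two ISL99390 90 A stages on the SoC rail
print(cpu_rail_a, soc_rail_a)     # 360 180
```

This matches the 360 A figure quoted for the CPU section and gives 180 A for the SoC section.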
One of the interesting aspects of the design is that the B550D4-4L has a removable 32 MB BIOS chip. This means users with a corrupted BIOS can easily swap in a fresh chip, which is useful in fast-paced professional environments. The housing itself consists of two black plastic clips with hinges that keep the BIOS chip securely in place.
On the rear panel at the far left is a D-Sub (DB15) video output for the BMC controller, with a serial port (DB9) also present. In terms of USB, the rear panel includes two USB 3.2 G2 Type-A and two USB 3.2 G1 Type-A ports. Networking is interesting, as the board has five Ethernet ports in total: four are individually controlled by Intel i210 Gigabit Ethernet controllers, while the fifth is powered by a Realtek RTL8211E Gigabit controller and provides dedicated remote access to the BMC. Finishing off the rear panel is a single HDMI 1.4 video output for use with processors that have integrated Radeon graphics.
What's in The Box
Included in the light yet effective accessories bundle are a user manual, a single black SATA cable, a rear I/O shield, and an M.2 installation screw.
- User manual
- Rear panel I/O shield
- 1 x SATA cable
- 1 x M.2 installation screw
73 Comments
bananaforscale - Saturday, May 22, 2021 - link
This.

mode_13h - Friday, May 21, 2021 - link
> I think 2x2.5G would be more appropriate for the target market of this board.

Probably the main issue is that support for 2.5 GigE is (still?) uncommon on enterprise switches.
> Anybody considering 10Gbe is likely on the verge of adopting 25/40/100G anyway
A lot of people are just starting to move up to 10 GigE. Anything faster doesn't make a lot of sense for SOHO applications.
bananaforscale - Saturday, May 22, 2021 - link
Especially considering how overpriced 10G twisted-pair NICs are.

mode_13h - Saturday, May 22, 2021 - link
Eh, I got a pair 2 years ago for < $100 each. I've spent more on a 3Com 10 Megabit PCI NIC, back in the late 90's. Or maybe it was 100 Mbps.

Samus - Monday, May 24, 2021 - link
Probably 100Mbps if it was PCI. The 100Mbps ISA NICs were pretty damn pricey because by the time 100Mbps became commonplace, ISA was on its way out and PCI was becoming mainstream (Pentium-era). Even now a 100Mbps ISA network card is $50+.
PixyMisa - Friday, May 21, 2021 - link
By preference, but some datacenters use Cat6 and others use SFP. Others have already moved up to 25GbE. 10GBase-T is perfect for workstations, but not necessarily so for servers.

mode_13h - Saturday, May 22, 2021 - link
> some datacenters use Cat6

Really? For what? Management? Twisted-pair is very energy-intensive at 10 Gigabits, and can't go much above. So, I'd imagine they just use it for management @ 1 Gbps.
Within racks, I'd expect to see SFP+ over copper. Between racks, it's optical all the way.
Samus - Monday, May 24, 2021 - link
I've toured a lot of datacenters in my lifetime and I can honestly say I haven't seen copper wiring used for anything but IPMI and, in extreme cases, POTS for telephone backup comms, though even this is mostly dead now as it has been replaced by cellular. Even HP iLO 2 supports fiber for remote management, and you can bet that at the distances and energy profiles data centers are working with, they use fiber wherever they can.

alexey@altagon.com - Friday, May 21, 2021 - link
Agree, companies are saving money and customers are paying more.

Spunjji - Monday, May 24, 2021 - link
That's an opinion, for sure.