The transition to 2.5 Gbps Ethernet has not been an easy one for Intel. The company's I225/I226 2.5 GbE Ethernet controllers (codename Foxville), a prevalent choice on Intel platform motherboards for the last few years, have presented a fair share of issues since their introduction, including random network disconnections and stuttering. And while Intel has been working through the issues with multiple revisions of the hardware, the company apparently hasn't hammered out all of the bugs yet, as evidenced by its latest bug mitigation suggestion. In short, Intel is suggesting that users experiencing connection issues on the latest I226-V controller disable some of its energy efficiency features, which appear to be a major contributor to the connection stability issues the I226-V has been seeing.

To mitigate the connection problems on the I226-V Ethernet controller, Intel is advising affected users to disable Energy-Efficient Ethernet (EEE) mode through Windows Device Manager. The same guidance applies to Linux users as well. EEE mode aims to lower power consumption when the Ethernet connection is in an idle state. The issue is that EEE mode seems to activate when an Ethernet connection is in active use, causing it to drop out momentarily.
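On Linux, the equivalent adjustment can typically be made with `ethtool`. This is a sketch only: the interface name `enp5s0` is a placeholder, EEE support depends on the driver, and the setting does not persist across reboots unless reapplied (e.g. via a systemd unit or udev rule):

```shell
# Show the current Energy-Efficient Ethernet status for the interface
# (substitute your own interface name, e.g. from `ip link`)
ethtool --show-eee enp5s0

# Disable EEE on the interface (requires root privileges)
sudo ethtool --set-eee enp5s0 eee off

# Confirm the change took effect
ethtool --show-eee enp5s0
```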

And while deactivating EEE does reportedly improve connection stability, it doesn't seem to be the ultimate solution: Intel has received reports that some users still experienced disconnections with EEE mode disabled. Furthermore, disabling EEE mode forgoes its intended benefits – such as reducing power draw by up to 50% when an Ethernet connection is idling – so it's not a feature that power-conscious consumers would normally want to turn off.

Intel has also released an updated driver set for the I226-V/I225-V family of Ethernet controllers that automatically makes this adjustment. Specifically, the patch deactivates EEE mode for connection speeds above 100 Mbps, but users may have to disable it entirely if the workaround doesn't work with their combination of hardware. MSI and Asus have already deployed the new Ethernet driver for their respective Intel 700-series motherboards, so other vendors shouldn't take long to do the same.

In the interim, Intel will continue investigating the root cause and provide a concrete solution for motherboards with the I226-V Ethernet controller. The Foxville family of Intel Ethernet controllers has a long history of connectivity quirks – going back to the original I225-V in 2019 and E3100 in 2020 – ultimately requiring multiple hardware revisions (B1, B2, & B3 steppings) before finding solutions to many of its issues. As a result, it's not off the table that the I226-V Ethernet controller may suffer the same fate.

Source: Intel (via TPU)


  • Rocket321 - Monday, March 6, 2023 - link

    2.5G seems like a perfect bump for home users over the ubiquitous 1G, however this article makes me glad I invested in used 10G enterprise gear instead. At twenty bucks per card, and surprisingly cheap pricing on optical cables, the investment cost was very affordable.
    I’ve had almost no issues over the past year running 10G on a few machines that benefit from faster than 1G connectivity. I wish consumer brands simply embraced 10G standards as the enterprise stuff is rock solid.
  • dwillmore - Monday, March 6, 2023 - link

I have not yet gone down the >1Gb ethernet road, but I'm inclined to wait for 10Gb to come down. I had a temporary need for a short distance high bandwidth link a while back. The most reasonable solution I found was a few 54Gb InfiniBand cards and a single cable to wire the two machines back to back. What looked like a daunting task--IB can be very complex--turned out to be barely harder than an ethernet connection. Fedora had all the drivers and software packages available. All I had to do was install them and start *one* daemon. Then I just brought up the interface and routed IP over it. Total expenditure was <$50 and I had 54Gb each way between two machines.

The cards are old and power hungry, so I'm not leaving them in place all the time, but if I ever have a similar need, I'm set. It did make me look into the state of >1Gb networking--which remains a mess of 2.5Gb devices for high prices, power hungry leftover server hardware for a reasonable price, and super expensive current gen 10Gb hardware. I'll wait.

    If I do end up needing more bandwidth, it's going to be between two specific machines most likely, so I'll still not have to upgrade everything. Maybe just point to point the two machines. If I need a third, I guess that would be the time to start looking for a small switch.
  • thestryker - Monday, March 6, 2023 - link

I got a pair of dual port Intel X520 SFP+ cards several years ago because switching hardware was so ridiculously expensive. I directly connected the two machines and then had separate 1Gb connections for the rest of the network. This worked extremely well, and SFP+ cards use less power than RJ45, so even though they're old they're not quite the power hogs.

    Last year I got a Zyxel 12 port switch (about $150 USD) with 2x 10gb sfp, 2x 2.5gb and 8x 1gb so I could consolidate hardware a bit and it has worked very well. Getting something similar from QNAP would probably be better, but the cost has been significantly higher.

    MikroTik might have something in the all 10gb 4 port variety which might be an option if you need more than 2 at a high speed and don't want to put down premium money.
  • DigitalFreak - Monday, March 6, 2023 - link

    I've found that it's easy to get 10G over Cat5e in typical homes. The runs are usually short enough that it's not a problem.
  • Gigaplex - Tuesday, March 7, 2023 - link

    I looked into options like that, but nothing was cost effective that could make use of the CAT6 already in my walls and also have a switch for multiple devices.
  • spamaway - Sunday, March 19, 2023 - link

    Where are you finding 10gbit optical cards at twenty bucks per? I think maybe you're going to make my year.

    I've found reasonably-priced switches (Mikrotik), reasonably-priced transceivers (10GTek), but all of the NICs I've found have been ~150 bucks... nearly as much as the switches.
  • Sivar - Monday, March 6, 2023 - link

    Intel has a bit of a history of unreliable ethernet controllers (remember the I225-V? The 82579LM?)

    Broadcom hardware may be a bit more expensive, but their hardware is rock-solid.
  • boozed - Monday, March 6, 2023 - link

    Funny, I was going to say "Intel's ethernet PHYs used to be the bees knees, what happened?"

    I remember them being the thing you paid extra for when you were buying a motherboard in the early days of integration.
  • Sivar - Tuesday, March 7, 2023 - link

    I remember the same, and I remember Intel having unbeatable process technology and the most efficient x86 CPU designs. It is sad that they have fallen so far, so fast, but I hope that Pat Gelsinger can right the ship.
  • Samus - Tuesday, March 7, 2023 - link

    I mean back in the day of Intel Pro 100 controllers, etc, they were a solid alternative to the more expensive 3COM 3c905. When 3COM disappeared and the competition was basically bottom barrel controllers from Realtek, Qualcomm, Atheros and the like, Intel was seemingly complacent in letting QC slide because everything after the 82573 has had issues, albeit, workaroundable issues.

    I'm glad to see the recent drivers (unfortunately not WHQL, so no windows update automated patch) disable the problematic issues because at the end of the day sure we are trying to save a watt of power here and there but reliability trumps efficiency in most circumstances.
