I like the idea of integrating 10Gbase-T and would be willing to pay a $200 premium for it; I just wish it wasn't on a motherboard that was designed with no regard to cost by integrating every other chip imaginable.
If they offered a cheaper $350-$450 model with a single 10GBase-T port I would be interested. There are already enough SATA ports and PCIe lanes in the X99 chipset; just add a single-port 10GBase-T controller to a mid-range board and you have a good product. You can get single-port 10GBase-T cards for under $250, and an X99 Extreme3 board for $200, so I don't see how combining the two could be more expensive than that.
Why exactly is 10Gbase-T so expensive, especially when compared to 1000Base-T? Is it because the technology inherently costs a lot to produce? Or is it because the primary customers of the technology regard those prices as pocket change?
10gig is expensive because it is impractical. Not that it is unnecessary, because there are plenty of uses for it (I could certainly use it in my home), but it is just impractical over copper. 10meg was fine, and 100meg was considered pushing copper to its limits. 1gig was considered impossible just over 10 years ago, but upping the power made it work. To get 10gig to work they took that approach to an extreme. You need PCIe lanes and lots of power to get it working, and that is just not going to happen on consumer equipment.
But we do need a 10+gig solution for home networking. It needs to be affordable, easy to deploy, and low enough power to work with passive cooling in everything from a laptop to a server. I don't know what the answer is, but it certainly isn't going to be twisted pair copper, and optical has its own issues for in-home deployment... no good answers here.
IIRC, I read somewhere in the last year that we were one or two process shrinks away from crossing a threshold that should result in much cheaper and lower-power 10GbE chips.
Correct. GbE experienced the same thing, though deployment WAS faster, relatively speaking.
A dual-port 10GbE solution has a TDP of around 14W right now. 15 years ago, 1GbE had pretty close to the same ~7W per port TDP and required a beefy heatsink or active cooling (a lot of dual/quad-port GbE cards needed active cooling until the mid/late 2000s).
10GbE IS coming (to "the home"). Just slowly. I doubt we are 2 years out, but I'd be surprised if we were 5 years out from it being pretty standard on mid/high-end motherboards, as well as affordable switches and NICs (by affordable I mean less than $50 per port).
Personally, I look forward to 1Gb being the norm for all router / modem ports. There are still far too many 100Mbit solutions for the bundled stuff you get from your ISP and even moderately priced routers.
I don't think that will change too much. The bundled solutions from ISPs are designed to deliver the internet speed you're paying for and no more. If they can save a dollar by only using 100Mbit Ethernet ports they will. You have to buy your own equipment if you want anything good.
There are a lot of ISPs offering over 100Mbps now, some of them even on their mid-level tiers! So your argument isn't going to hold up for long.
Sadly they are also offering sub 1MBps for a lot of the UK. Some areas get fibre to the cabinet, but plenty of others don't even get a cabinet.
Most ISPs don't deliver data faster than 100Mbit, so it would be wasteful to support higher speeds. Nothing is stopping you from connecting a 1Gb switch to your cable modem like the rest of us do.
Most ISPs in the USA and some other countries are both very greedy and very skinflint, like Apple. The result is client access that is among the most expensive and slowest in the world, and/or restricted by volume.
Or buy a modem instead of renting it. A month ago I upgraded from 15Mbps DOCSIS 2 to 75Mbps DOCSIS 3 cable. My shiny new Moto Surfboard has a gigabit port (needed to support its max 120/133Mbps data rate). I just need to replace my old 100Mb/N router now; it turns out it bottlenecks at ~50Mbps. That hasn't been an issue for internal traffic, since all my wired traffic runs over a gigabit switch (I wanted more than 4 ports, and at the time I bought it there was still a significant price premium for gigabit routers), but replacing it is going to be my next medium-sized tech purchase. (I was hoping AC routers would get cheaper first, but I can't see buying a gigabit-N model now.)
Not really true, Jeff - AT&T is the only one of the top 5 ISPs not offering 100Mbps+ service now (unless you count their very limited availability of gigabit service). Fast Ethernet tops out around 80Mbps (or maybe 90 if you're fortunate) in real-world usage. An ISP advertising 100Mbps service, such as Cox, typically delivers slightly over 120Mbps. So 100Base-T isn't "going to become a bottleneck" for ISPs, it already is.
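For anyone curious where real-world ceilings like that come from, here is a rough back-of-the-envelope sketch. It only counts standard frame and header overhead at a 1500-byte MTU and ignores retransmits, driver overhead, and everything else on the host, so treat the results as upper bounds rather than anyone's measured numbers.

```python
# Rough TCP goodput ceilings for Fast Ethernet, GbE and 10GbE, assuming standard
# 1500-byte MTU frames (no jumbo frames), no retransmits and no host-side limits.
MTU = 1500                      # bytes of IP packet per Ethernet frame
ETH_OVERHEAD = 14 + 4 + 8 + 12  # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP_HEADERS = 20 + 20        # IPv4 + TCP headers, no options

wire_bytes = MTU + ETH_OVERHEAD          # 1538 bytes on the wire per frame
payload_bytes = MTU - IP_TCP_HEADERS     # 1460 bytes of application data per frame
efficiency = payload_bytes / wire_bytes  # ~0.949

for name, line_rate_mbps in [("100Base-TX", 100), ("1000Base-T", 1000), ("10GBase-T", 10000)]:
    goodput_mbps = line_rate_mbps * efficiency
    print(f"{name:>10}: ~{goodput_mbps:5.0f} Mbps of TCP payload (~{goodput_mbps / 8:4.0f} MB/s)")
```

So even a perfectly clean 100Mbit LAN port can never carry the ~120Mbps some cable tiers actually provision, while GbE still has plenty of headroom for them.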
Almost every modem or gateway that my company, along with most of our competitors, is putting into the field supports GbE on the LAN ports, and most of them have GbE on the WAN port as well. Even if the higher-speed offerings are not available in your neighborhood, you can still take advantage of the GbE LAN ports for in-home wiring.
The next big bottleneck that you will see change over the next year or so is the built-in WiFi migrating from single-channel 2.4GHz 802.11n to 2.4 and 5GHz MIMO 802.11ac, now that the chipsets have dropped in price and are spec'd for our future gateways and modems.
The problem isn't consumer equipment or die-shrink, power requirements, etc. It's PCIe lanes. Consumer-class Intel chips have very few PCIe lanes left over once you give x16 to a video card. Give consumer-class chips the 40 lanes of the 2011-socket chips and we'll have enough internal bandwidth (that's PCIe lane to CPU/mem bandwidth) to make 10gig ethernet useful.
Skylake will go a long way toward mitigating that problem. In addition to upgrading the chipset to PCIe 3.0, it adds 10 more flexible IO ports (each configurable as a PCIe lane, SATA port, or USB3 port). That should give enough extra capacity to add a 10GbE port to a consumer board without needing a PLX switch or cutting into GPU resources.
Your overall theme is correct, but the details are not. 802.3ab (gigabit Ethernet over twisted pair, i.e. what most of us care about) was ratified in 1999, and Apple was shipping machines with gigabit Ethernet in 2001. Moreover, those machines were laptops --- GigE did not require outrageous amounts of power, but it DID require what a few years earlier might have been considered outrageous amounts of signal processing, unlike the fairly "obvious" (hah!) signal processing required for 100M twisted-pair Ethernet.
Yes, it is impractical, but not really expensive if you can do without a switch or need only a few ports and can go SFP+. I'm just not sure why they decided to integrate 10GBase-T PHYs instead of SFP+ ports and be done with it; that would allow for a ton more flexibility and is much cheaper.
The problem is really that it's *very* hard to actually use the 10Gbit/s. While using 1Gbit/s is no problem anymore, properly saturating a 10Gbit/s link is still tricky; we tried that with our shiny new server hardware and failed to get more than we did with bonding, so we decided not to fork out a big chunk of money for a switch with enough SFP+ ports.
Daniel, I would think copying files between two PCs with SSDs should be enough to make 10GBase-T worthwhile; besides, doesn't the "bonding" you're using still limit a single connection to the 1GBit speed? So when you say "*very* hard to really use" aren't you assuming mainstream usage models? I ask because you obviously have experience with it, but it sounds like it's within a certain assumed context. I also wonder why it's necessary to saturate the network in order to believe you're getting the value out of it; I'm accustomed to environments where you want to avoid saturating most computer resources.
The bonding may be more cost effective and practical for a typical corporate data center, but for copying 4K video files around I'd rather have 10GBase-T now. And obviously this need will only grow as time goes on.
> Daniel, I would think copying files between two PCs with SSDs should be enough to make 10GBase-T worthwhile
Nope, because the underlying protocols (NFS, SMB, ...) down to TCP/UDP are not prepared, out of the box, to push data that fast.
> besides, doesn't the "bonding" you're using still limit a single connection to the 1GBit speed?
Coincidentally it does here, but that's not necessarily the case. But again, the point is: it's more than unlikely that you'll saturate a 1000Base-T, let alone a 10GBase-T, link with a single connection except in synthetic benchmarks.
> So when you say "*very* hard to really use" aren't you assuming mainstream usage models?
I guess you could call our usage mainstream, however most users would probably disagree at a hardware value of around 15k€. ;) The use case is a multi-server VM hosting setup (only for company use): we have 3 two-socket Xeon servers and a NAS. Most of the images used by the VMs are on the NAS, served by NFS. Two of the servers have 10GBase-capable SFP+ ports, so just for tests we connected them back to back and ran some benchmarks (exporting storage on an SSD to the other server, for instance), tweaked a little and reran them, but the results were not good enough to warrant a multi-k€ upgrade of the switch and, more importantly, the NAS, which cannot even saturate the 1000Base-T links.
> I also wonder why it's necessary to saturate the network in order to believe you're getting the value out of it
Because the only important KPI is the QoS. Improving the bandwidth without having a bottleneck (as indicated by the saturation) does not result in a better QoS, so basically it'd be wasted money and no one likes that. ;)
> The bonding may be more cost effective and practical for a typical corporate data center, but for copying 4K video files around I'd rather have 10GBase-T
Chances are you will be *really* disappointed by the performance you might get out of that. If you want to copy files you'd be much better off looking at an eSATA or Thunderbolt solution...
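Daniel's single-connection point is easy to check for yourself. Below is a minimal sketch of a single-stream TCP probe; iperf3 is the proper tool for this, and the port number and addresses are placeholders. A lone untuned stream like this will often stall well short of 10Gbit/s on CPU, socket-buffer and protocol limits, which is exactly the effect being described; several parallel streams (e.g. iperf3 -P 4) usually get much closer to the wire rate.

```python
# Minimal single-stream TCP throughput probe (a sketch; iperf3 is the proper tool).
# Run "python probe.py server" on one machine and "python probe.py client <ip>" on the other.
import socket, sys, time

PORT = 50007                 # placeholder port
CHUNK = b"\0" * (1 << 20)    # 1 MiB per send call

def server():
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(1 << 20):   # read and discard until the client disconnects
                pass

def client(host, seconds=10):
    sent = 0
    with socket.create_connection((host, PORT)) as s:
        deadline = time.time() + seconds
        while time.time() < deadline:
            s.sendall(CHUNK)
            sent += len(CHUNK)
    print(f"pushed {sent / 1e9:.1f} GB in ~{seconds}s -> {sent * 8 / seconds / 1e9:.2f} Gbit/s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```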
Thanks for being so specific, Daniel. It all makes sense within your environment.
However, I must say that this definitely doesn't apply to all environments. eSATA and Thunderbolt may be good for something like DAS, but tricky or impossible to use for peer to peer data transfer, and no good if there's any distance between them. Ethernet infrastructure is already well established down to the OS level, meaning virtually any two devices with Ethernet ports can generally share data and communicate in any way necessary, right out of the box.
I wouldn't be shocked if you told me 10GBase-T currently tops out around 3-4Gbps in many real-world implementations, because Gigabit Ethernet similarly fell well short of its rated speed in earlier versions of Windows, for example (before Windows 7, I believe). But then we have to start talking about which OS, hardware, and drivers are involved, because I wasn't seeing the same problem in other OSes at the time. I think a lot of the HW/SW you were using would be a prime candidate for a non-optimal implementation of 10GbE, or be subject to bottlenecks elsewhere. And I have no doubt your solution was good. But this doesn't mean it's *very* hard to take advantage of the bandwidth, when a simple file copy will do it.
We've been through this a few times already with Ethernet, Fast Ethernet, and then GbE. With Ethernet, most implementations ran around 300KB/s until 3COM came out with their Etherlink III. Suddenly I was seeing around 900KB/s+, or roughly 80-90% of the theoretical maximum. I saw a similar pattern repeated each time, with each new Ethernet standard starting out performing at only 30-40%, then moving up to perhaps 60-70%, and eventually landing at 80-90% of the maximum. So I'm making a rather educated guess that if you use the right OS you can get at least 6-7Gbps out of the ASRock motherboard when it's released, using a very real-world (not synthetic) test of copying files using the OS' copy command. This will make 10GbE very useful in some real-world situations right now.
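The "copy a file with the OS and time it" test described above is simple enough to script. A minimal sketch, assuming a large local file and a mounted share at a placeholder UNC path; divide bytes by seconds and you have your real-world number for whatever link is in between.

```python
# Time one big file copy onto a network share and report effective throughput (sketch).
import os, shutil, time

SRC = r"C:\temp\big_test_file.bin"          # a large local file on a fast SSD (placeholder)
DST = r"\\server\share\big_test_file.bin"   # placeholder UNC path on the remote machine

size = os.path.getsize(SRC)
start = time.time()
shutil.copyfile(SRC, DST)
elapsed = time.time() - start

mb_per_s = size / elapsed / 1e6
print(f"{size / 1e9:.1f} GB in {elapsed:.1f} s -> {mb_per_s:.0f} MB/s (~{mb_per_s * 8 / 1000:.2f} Gbit/s)")
```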
> But this doesn't mean it's *very* hard to take advantage of the bandwidth, when a simple file copy will do it.
Except that it doesn't. For 1000Base-T you already have to work a bit to get the most out of a single Gbit/s link, especially if you only have a single client; for 10GBase-T it's impossible. But don't take my word for it, you can read it right on AnandTech... http://www.anandtech.com/show/7608/netgear-readyna...
Or you can read the case study from Intel: http://download.intel.com/support/network/sb/fedex...
And those guys definitely know what they're doing...
The testing setup page of that article mentions that they got a 10GbE switch for running the test network, but there was no mention of getting any 10GbE cards to run single-client tests with. Looking at where the tests all topped out, I'm almost certain that the bottlenecks were the 1GbE links each client was running on. Since Intel recently launched its new 10GbE cards, maybe they can be convinced to donate a few for review and permanent inclusion in the testbed.
I don't quite understand your reasoning, Daniel. Two current Windows machines connected through a basic gigabit switch is about as simple and easy as it gets. Plug and play 100MB/s+. Doesn't even cost very much!
No need to try to interpret someone else's tests when you can simply try it for yourself! Once you've established your baseline - that it achieves its full potential (80%+) in a rudimentary configuration, you can proceed to figure out how to avoid bottlenecks and achieve similar performance in whatever complex target configuration you desire.
But don't undermine the basic technology as if it doesn't reach its potential, when in fact it does. If you can't achieve similar results your infrastructure is dragging it down.
I agree that 10 gigabit is really impractical over copper, which is why you just pick up an SFP NIC and a switch with an SFP uplink. If your application isn't raw downstream throughput (like a 10 gigabit backbone with a few dozen 1 gigabit clients) then you can do point-to-point 10 gigabit with SFP NICs and media converters (which are an added cost).
Most 10 gigabit networks I've built are for the former application. One was for imaging machines at a computer recycling company where downstream throughput was key (I did dual 10-gigabit uplinks from the imaging server to dual 48-port layer 2 switches) and the other was for an office that had a demanding SQL database from 25 or so simultaneous connections, so a single 10 gigabit uplink to a 26 port UNMANAGED gigabit switch was adequate.
Most of the time a teamed gigabit NIC with auto-failover is adequate for networks of <50 nodes.
Both. They have had a hard time producing 10G chipsets that use a reasonable amount of energy. They've been plagued with heat and power issues on both the client side and the switch side. It has taken forever to get Base-T switches; fiber ones have been around for a while.
The other side is that they can get away with charging a lot, so they do. Competing technologies are fibre channel and infiniband - go price those out with a switch.
AnandTech showed a few years ago that performance for a single-port 10G connection was better than 4x 1Gb ports any way you looked at it.
Supply and demand. Plain and simple. Not a lot outside of the enterprise/datacenter requiring it, and the enterprise vendors know what they can charge for it. Back in the SOHO market, not many care to provide it so...the market/supply is small.
Because 1000Mbit/s is way bigger than raw Blu-ray at 54Mbit/s, so you can do 20 HDTV streams or 4 UHD streams over GigE if your drives can keep up? Because you need two wired machines in close proximity, and what's the point of a 10GigE server if all that connects to it is laptops, tablets and smartphones, mostly over WiFi? The only place I know it's been used is between a server and a SAN; topping out at ~700MB/s (5.4Gbit/s) of actual performance it's quite neat, but that's a very specific niche in number crunching.
One could alternatively buy a PCIe x4 10-gigabit dual-port SFP card for <$100 (eBay) and a switch with 10-gigabit SFP uplink for the same price. You'd even get the added benefit of layer 2/3 management.
I like the idea of having it all integrated too, but you're right: when it comes to integration, these OEMs seem to "pull a Samsung" and throw it all in plus the kitchen sink. It's overkill.
I just want X99 to come to ITX already. They neglected it with X79 even though it's entirely possible, because overclocking is pretty much out of the question and making something non-overclockable is 'suicide', I guess...
X99 + ITX won't happen any time soon. The size of the socket and DRAM leaves little space for anything else, and out of the CPU PCIe lanes you'll be able to use 16 on a single PCIe slot. Unless there is significant demand, motherboard manufacturers see that as a waste of resources and users won't want to buy a 40 PCIe lane CPU and not be able to use most of them. I have suggested cutting down to dual channel memory to save space for a chipset and some controllers, but the same argument: users who pay for quad channel support won't want dual channel. Then find space for the power delivery, SATA ports etc. As said, if there is *demand* (1000s of units, not 10s) then there might be a compromise.
I have a very light EE background, but if they moved the VRM array to a vertical riser and used SO-DIMMs there would be plenty of room for a few integrated components. You are right that there is physically room for only a single PCIe x16 slot, but many people don't care about 40 lanes. The raw throughput of the memory bus is my main attraction. It's annoying that the X58 chipset has more raw memory bandwidth than all the mainstream platforms today... just because it was triple channel.
"Regular readers of my twitter feed might have noticed that over the past 12/24 months, I lamented the lack of 10 gigabit Ethernet connectors on any motherboard. My particular gripe was the lack of 10GBase-T, the standard which can be easily integrated into the home."
Ian, haven't Supermicro sold a few X540-based motherboards for quite a while now?
Hmm, impractical? We have had 100GbE solutions (100Gb/s on a single optical line) in mass production since 2012. Many ISPs have started refreshing their networks with the new generation of hardware. Don't claim something is impractical when it isn't. If we all followed your way of thinking, humankind would never have left the Stone Age.
I... did you read more than one sentence before replying? Because he's completely right and not just being stubborn and hidebound. Copper wire is a significant bottleneck these days. Optical is too delicate for home use, even ignoring the cost of decent fiber.
"TOSLINK proves it's doable cheap and reliably" I imagine you might say. But TOSLINK is currently limited to a tad over a hundred megabit, and is still quite delicate next to it's copper brethren. Home users don't think twice about tying a knot in a cord to take up excess slack or keep it from wandering off. That breaks even the plastic fibers in TOSLINK immediately, to say nothing of a quality GLASS fiber. And god help the poor soul who runs his chair over a cable. That's part of why TOSLINK never saw widespread use, and we run our SPDIF over copper through an HDMI jack these days.
The equipment you're talking about is VERY expensive, and it's not just because it's business hardware.
Am I saying there will be no more advances in networking? No, of course not. Am I saying that the reign of copper is coming to an end? You better believe it. Am I saying optical fiber is too delicate to be trusted to the home users that apparently run cable with a pair of hamhocks in boxing gloves instead of their hands? Ayup. Am I saying we'll see 10-gigabit radios in the home before we see 10-Gb cables in the home? God, I hope not.
ASRock is thinking of the near future. You won't need a copper cable out to the street switch. Copper will just be the connection over a very short run between an optical-to-copper IO adapter and the PC. Over distances measured in inches, copper cables have no problem with 10Gb/s speeds.
Both Aquantia and BRCM have been offering 10GE on more than 100m of copper cable for over 5yrs now. The issue was power, but the new generation of 10GE PHYs in 28nm tech will bring PHY power down from the current ~7W/port to almost half. Intel will have the 10GE NICs with Aquantia 28nm PHYs in 2015, less than a year away.
Assuming you're talking about Intel Fortville, it's already launched, but they're quoting the same 7W TDP for the 2x 10Gb, 4x 10Gb/1x40, and 4x10/40Gb configurations. Unless they're thermal binning, or most of the power is being consumed by components that aren't sensitive to the data rate, the equal TDP for all three parts seems odd to me. http://ark.intel.com/compare/82945,82946,82944
"In order to deal with the heat from the Intel X540-BT2 chip being used, the extended XXL heatsink is connected to the two other heatsinks on board, with the final chipset heatsink using an active fan."
Looking at the pictures, I can't see any connection between the two sinks around the socket and the one at the bottom of the board with a fan.
You're right, I thought I saw something there.
I'm curious, what is the use case for 10GbE at home? I have a server at home with a 6TB RAID5 array and it's no big deal if it takes 4 minutes to transfer a freshly ripped 40 GB mkv file over 1GbE.
Is it iSCSI? Who is using iSCSI at home anyway?
As a workstation board for a professional with an iSCSI storage array, this makes some sense... but for even the odd-ball "prosumer" I just don't see the lack of 10GbE being as big a shortcoming as you make it sound. Especially when it doubles the cost of an otherwise very competent workstation motherboard.
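For scale, here is the rough arithmetic on that 40GB file at a few sustained rates; a sketch that assumes the wire-speed ceilings estimated earlier and, for the 10GbE rows, that the disks on both ends can keep up.

```python
# Rough transfer times for a 40 GB file at a few sustained rates (sketch).
FILE_GB = 40

for label, mb_per_s in [
    ("1GbE at its ~118 MB/s ceiling", 118),
    ("10GbE limited by a single SATA SSD (~500 MB/s)", 500),
    ("10GbE at its ~1180 MB/s ceiling", 1180),
]:
    minutes = FILE_GB * 1000 / mb_per_s / 60
    print(f"{label}: {minutes:.1f} min")
```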
I can't stand this argument. Hey Jeff, just because you have no use for faster technology, or can't see why anyone would need it, doesn't mean no one else does. After all, 640K should be enough for anyone, am I right?
Define "no use"? I use 2x1GbE with Windows 8.1 and SMB Multichannel to get 2Gb of true bandwidth. I am changing around my storage, so I am heavily storage-bottlenecked by my single 3TB 7200rpm drive, but in a couple of months I'll have a 2x3TB RAID0 in my server and desktop instead of the mishmash of 1x3TB and 2x1TB I have going on right now (it used to be 2x2 and 2x1, but one of the 2TB disks started getting a little flaky).
Once I am done with the upgrade, I should be able to easily push 300MB/sec over a network pipe sufficient for it. I don't NEED to do that, but I certainly wouldn't mind being able to utilize it. SSDs are dropping in price faster than HDDs, and seem to have been for a while. Price parity for storage is nowhere close yet, but still, it MAY happen someday. Or it might be that most people have no issues with SSD storage, even for "bulk" things like video collections.
My 1.7TiB of data took ~3hrs to transfer over because of disk-bound limitations. Once I build it out to a 2x3TB array, it should only take around 2hrs to transfer things to it (and I'll try setting up a 3rd GbE link using the onboard NICs in the machines and running a temp 100ft network cable, just to see if the onboard NICs will play well with the Intel Gigabit NICs I have in the machines... because network porn).
I absolutely do not need 10GbE. I can however desire it, and I can see really wanting it once my storage is even faster. I can grok a situation in a few years where my HDDs are getting long in the tooth and it might make sense to pay the reasonable premium and just get SSDs for my bulk storage. THEN 10GbE would make a lot of sense to take advantage of the speed provided.
Plus, 10GbE can provide advantages like running remote storage as local storage, especially with iSCSI, as the significantly reduced latency of 10GbE is really needed in some ways to not make that painful.
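A quick sanity check on what those transfer windows look like as the pipe and the storage get faster; a sketch using the figures quoted above, nothing more.

```python
# Hours to move a 1.7 TiB data set at a few sustained transfer rates (sketch).
DATA_BYTES = 1.7 * 2**40

for label, mb_per_s in [
    ("single GbE link (~118 MB/s)", 118),
    ("2x GbE with SMB Multichannel (~235 MB/s)", 235),
    ("2x3TB RAID0 over a faster pipe (~300 MB/s)", 300),
    ("10GbE, if the storage could feed it (~1000 MB/s)", 1000),
]:
    hours = DATA_BYTES / (mb_per_s * 1e6) / 3600
    print(f"{label}: {hours:.1f} h")
```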
You are a new Bill Gates, with a new 640K urban legend. You may be right for yourself at this moment, but you are making a big mistake about the near future and what we will all need when that future arrives.
Is ASRock any good as a workstation motherboard? Where I live you can only find ASRock, MSI or Intel, and the ASRock products sold here are really crap. For my builds I always import all components and go with Gigabyte for the mobo.
Oh my gods, no. Gigabyte for server/workstation? Gods, no, please. I worked in Gigabyte mobo repair service for a couple of years, and we also had some drop-ins of other brands. After that I'm genuinely shocked that Gigabyte still exists, as is MSI. All of their products, except the MSI Big Bang Fusion, are complete and utter crap, from layout (a chipset heatsink mounting screw blocking a PCI Express slot?) through electrical work (no galvanic separation between sound card and USB) to PCB layers coming apart, especially on boards with more copper. MSI often has misplaced components (I resoldered thousands of MOSFETs and other small SMDs just by 1mm, and everything started working), and absolutely no onboard current spike protection. ASRock was always making ingenious but often low-quality motherboards (caps! MOSFETs!), although everything else was usually decent - they have skilled and brave engineers in-house. Recently, when they switched manufacturing, ASRock became surprisingly solid. I find their workstation and Extreme6+ boards astonishingly good quality, on par with ASUS (which has dropped in quality recently).
I need a fast SAN rig to run a virtualized environment. OK, cool... already shopping 10GbE for my home. There are lots of decent new 10GbE NICs on eBay for cheaper than you think, but these Intel ones will do if the price is right. I already like ASRock; kudos for the M.2 support along with the PLX 8747 and tons of PCIe slots. LSI is great as well - you can find new re-branded LSI RAID cards all over the place... way cheap. Why would I pay for an onboard LSI solution that doesn't support RAID 5? Cut the board price by a third (more, please) and keep the LSI chips.
A partial (admittedly less than ideal) step in this direction is to provide multi-path TCP in the OS. In the common situations of interest one is running multiple of the same OS around the house/office, so both ends will support mTCP. One can then aggregate a gigE connection, whatever your WiFi offers, and one or two or three USB (and/or thunderbolt) ethernet adaptors.
Yeah, this doesn't give us 10G; but it can easily and cheaply give us 2 or 3G, which is 2 or 3x faster than what we have today...
[Note that this is aggregation at the TCP level. Aggregation at the ethernet level already exists, certainly in OSX, I assume in Linux, but it's finicky and requires special hardware. Working at the TCP level, mTCP should just aggregate automatically and cleanly over all the network ports you may have.]
Which raises the question: what's the status of mTCP?
- Supposedly we had a trial experimental version in iOS 7 (which was only used for Siri).
- I saw a few reports that it was part of the Yosemite betas, but it's not there as of 10.10.1.
- I believe it's part of Linux (but I've no idea what that means in terms of which distros have it available and switched on by default).
- Windows, as usual, seems late to the party; I've not even heard rumors about this being targeted as part of Win 10.
Anyone have updates on any of these?
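The sort of aggregation described above can even be faked at the application level today, by opening one connection per NIC and binding each socket to a different local address. Below is a toy sketch: the addresses are placeholders, it assumes a receiver like the probe earlier in the thread that accepts two connections, and whether each flow really leaves via its own NIC depends on the host's routing setup. Real MPTCP or SMB Multichannel handle scheduling, reordering and failover properly; this does not.

```python
# Toy user-space "aggregation": one TCP connection per NIC, each bound to a different
# local source address (placeholders). Whether each flow really leaves via its own NIC
# depends on the host's routing configuration; real MPTCP/SMB Multichannel do far more.
import socket, threading

LOCAL_IPS = ["192.168.1.10", "192.168.2.10"]   # one address per NIC on the sending box
SERVER = ("192.168.1.20", 50007)               # a receiver that accepts two connections
CHUNK = b"\0" * (1 << 20)                      # 1 MiB per send call
TOTAL_CHUNKS = 4096                            # 4 GiB in total, split across the links

def push(local_ip):
    with socket.create_connection(SERVER, source_address=(local_ip, 0)) as s:
        for _ in range(TOTAL_CHUNKS // len(LOCAL_IPS)):
            s.sendall(CHUNK)   # sendall releases the GIL, so the two threads overlap

threads = [threading.Thread(target=push, args=(ip,)) for ip in LOCAL_IPS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```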
I think you are late to the party. Windows has had multipath since Windows 8/Server 2012, in the form of SMB Multichannel. Read my comment further up: I've had it running for ~3yrs now to get 235MB/sec from my server to my desktop and back using a pair of Intel Gigabit CT NICs.
It was NOT part of Linux last I checked, which was admittedly ~8-10 months ago. No NAS that I am aware of supports multipath/multichannel for increasing network storage performance.
Real TCP multipath, not SMB multipath... SMB is a much easier problem than TCP, though less general. I agree that it's a good interim solution --- it solves 90% of people's problems with 10% of the work. Apple, however, has committed to TCP aggregation and the general solution, so after looking at AFP (and I assume also SMB) multipath, they decided to focus all efforts on TCP multipath.
I don't want to denigrate what MS has done --- multipath SMB is useful. But let's also not pretend that it is the same thing that I am talking about.
As for Linux: http://www.multipath-tcp.org
I don't care about NAS, because I don't use one, but you are right that that would be a significant real-world issue, and it will probably be messily resolved over the next five years.
The case I care about is multiple Macs to Macs, ideally full TCP, though I could live with just AFP or SMB; and this is what I hoped we'd get with Yosemite.
Or the new Intel 40Gbit Fortville chipset to really make that home network zing along! http://www.servethehome.com/40gbe-intel-fortville-...
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
50 Comments
Back to Article
The Von Matrices - Monday, November 24, 2014 - link
I like the idea of integrating 10Gbase-T and would be willing to pay a $200 premium for it; I just wish it wasn't on a motherboard that was designed with no regard to cost by integrating every other chip imaginable.If they had a cheaper $350-$450 model that had a single 10GBase-T port I would be interested. There already are enough SATA ports and PCIe lanes in the X99 chipset; just add a single port 10GBase-T controller to a mid range board and you have a good product. You can get single port 10GBase-T cards for under $250, and an X99 Extreme3 board for $200, so I don't see how combining the two could be more expensive than that.
r3loaded - Monday, November 24, 2014 - link
Why exactly is 10Gbase-T so expensive, especially when compared to 1000Base-T? Is it because the technology inherently costs a lot to produce? Or is it because the primary customers of the technology regard those prices as pocket change?CaedenV - Monday, November 24, 2014 - link
10gig is expensive because it is impractical. Not that it is unnecessary, because there are plenty of uses for it (could certainly use it in my home), but just impractical over copper. 10meg was fine, and 100 meg was considered pushing copper to limits. 1gig was considered impossible just over 10 years ago, but by upping the power made it work. To get 10gig to work they took that approach to an extreme. You need PCIe lanes, and lots of power to get it to work, and that is just not going to happen on consumer equipment.But we do need a 10+gig solution for home networking. It needs to be affordable, easy to deploy, and low enough power to work with passive cooling in everything from a laptop to a server. I don't know what the answer is, but it certainly isn't going to be twisted pair copper, and optical has its own issues for in-home deployment... no good answers here.
DanNeely - Monday, November 24, 2014 - link
IIRC reading somwhere in the last year that we were one or two process shrinks away from crossing a threshold that should result in much cheaper and lower power 10GBE chips.azazel1024 - Monday, November 24, 2014 - link
Correct. GbE experienced the same thing, though deployment WAS faster, relatively speaking.A dual port solution has a TDP of around 14w for 10GbE right now. 15 years ago, 1GbE had pretty close to the same ~7w per port TDP and required a beefy heatsink or active cooling (a lot of dual/quad port GbE cards neede active cooling until the mid/late 2000's).
10GbE IS coming (to "the home"). Just slowly. I doubt we are 2 years out, but I'd be suprised if we were 5 years out from it being pretty standard on mid/high end motherboards, as well as affordable switches and NICs (by afforable I mean less than $50 per port).
Death666Angel - Monday, November 24, 2014 - link
Personally, I look forward to 1Gb being the norm for all router / modem ports. There are still far too many 100Mbit solutions for the bundled stuff you get from your ISP and even moderately priced routers.Flunk - Monday, November 24, 2014 - link
I don't think that will change too much. The bundled solutions from ISPs are designed to deliver the internet speed you're paying for and no more. If they can save a dollar by only using 100Mbit Ethernet ports they will. You have to buy your own equipment if you want anything good.DCide - Monday, November 24, 2014 - link
There are a lot of ISPs offering over 100Mbps speeds now, with some in fact already offering 100Mbps+ even on their mid-level tiers! So your argument isn't going to hold up for long.BedfordTim - Monday, November 24, 2014 - link
Sadly they are also offering sub 1MBps for a lot of the UK. Some areas get fibre to the cabinet, but plenty of others don't even get a cabinet.JeffFlanagan - Monday, November 24, 2014 - link
Most ISPs don't deliver data faster then 100Mbit, so it would be wasteful to support higher speeds. Nothing is stopping you from connecting a 1Tb switch to your cable-modem like the rest of us do.Pork@III - Monday, November 24, 2014 - link
Most ISP's in USA and some other countries are both very greedy and very skinflint like Apple. Therefore client access that is both one of the most expensive and slowest and / or restricted by volume in the world.DanNeely - Monday, November 24, 2014 - link
Or buy a modem instead of renting it. A month ago I upgraded from from 15MB docsis 2 to 75MB docsis 3 cable. My shiny new Moto Surfboard has a gigabit port (needed to support its max 120/133mbps data rate). I just need to replace my old 100mb/N router now; turns out it bottlenecks at ~50mbps. That hasn't been an issue for internal traffic, all my wired traffic runs over a gigabit bit switch (wanted more than 4 ports and at the time I bought it there was still a significant price premium for gigabit routers), but replacing it's going to be my next medium sized tech purchase. (I was hoping ac routers would get cheaper first; but can't see buying a gigabit-N model now.)DCide - Monday, November 24, 2014 - link
Not really true Jeff - AT&T is the only one of the top 5 ISPs not offering 100Mbps+ service now (unless you count their very limited availability of Gigabit service). Fast ethernet tops out around 80 Mbps (or maybe 90 if you're fortunate) in real-world usage. An ISP advertizing 100Mbps service, such as Cox, typically delivers slightly over 120Mbps. So 100Base-T isn't "going to become a bottleneck" for ISPs, it already is.Hydrocron - Monday, November 24, 2014 - link
Almost every modem or gateway that my company is putting in to the field along with most of our competitors all support Gb on the LAN ports and most of them have Gb on the WAN port as well. Even if the higher speed offerings are not available in your neighborhood you can still take advantage of the Gb LAN ports for in home wiring.The next big bottleneck that you will see change over the next year or so will be the built in wifi migrating from single channel 2.4ghz 802.11n to 2.4 and 5 ghz MIMO 802.11ac now that the chipsets have dropped in price and are spec'd for our future Gateways and modems.
Ammaross - Wednesday, November 26, 2014 - link
The problem isn't consumer equipment or die-shrink, power requirements, etc. It's PCIe lanes. Consumer-class Intel chips have very few PCIe lanes left over once you give x16 to a video card. Give consumer-class chips the 40 lanes of the 2011-socket chips and we'll have enough internal bandwidth (that's PCIe lane to CPU/mem bandwidth) to make 10gig ethernet useful.DanNeely - Wednesday, November 26, 2014 - link
Skylake will go a long way to mitigating that problem. In addtion to upgrading the chipset to PCIe 3.0, It's adding 10 more flexible IO ports (can be configured as a PCIe lane, Sata port, or USB3 port). That should give enough extra capacity to add a 10GB port to a consumer board without needing a PLX or cutting into GPU resources.name99 - Monday, November 24, 2014 - link
Your overall theme is correct, the details are not.802.3ab (gig ethernet over twisted pair, ie what most of us care about) was ratified in 1999, and Apple was shipping machines with Gig ethernet in 2001. Moreover these machines were laptops --- gigE did not require outrageous amounts of power, but it DID require what a few years earlier might have been considered outrageous amounts of signal processing, unlike the fairly "obvious" (hah!) signal processing required for 100M twisted pair ethernet.
Daniel Egger - Monday, November 24, 2014 - link
Yes, it is impractical but not really expensive if you can do without a switch or need only few ports and can go SFP+. Just not sure why they decided to integrate 10GBase-T PHYs instead of SFP+ ports and be done with it; allows for a ton more flexbility and is much cheaper.The problem is really that it's *very* hard to really use the 10GBit/s. While using 1Gbit/s is no problem anymore properly saturating a 10GBit/s still tricky; we tried that with our shiny new server hardware and failed to get more than with bonding so we decided not fork out a big chunk of money for a switch with enough SFP+ ports.
DCide - Monday, November 24, 2014 - link
Daniel, I would think copying files between two PCs with SSDs should be enough to make 10GBase-T worthwhile; besides, doesn't the "bonding" you're using still limit a single connection to the 1GBit speed? So when you say "*very* hard to really use" aren't you assuming mainstream usage models? I ask because you obviously have experience with it, but it sounds like it's within a certain assumed context. I also wonder why it's necessary to saturate the network in order to believe you're getting the value out of it; I'm accustomed to environments where you want to avoid saturating most computer resources.The bonding may be more cost effective and practical for a typical corporate data center, but for copying 4K video files around I'd rather have 10GBase-T now. And obviously this need will only grow as time goes on.
Daniel Egger - Tuesday, November 25, 2014 - link
> Daniel, I would think copying files between two PCs with SSDs should be enough to make 10GBase-T worthwhileNope, because the underlying protocols (NFS, SMB...) down to TCP/UDP and are out of the box not prepared to push data that fast.
> besides, doesn't the "bonding" you're using still limit a single connection to the 1GBit speed?
Coincidentally it does here but that's not necessarily the case. But again the point is: it's more than unlikely to saturate a 1000Base-T let alone a 10GBase-T link with a single connection except for synthetical benchmarks.
> So when you say "*very* hard to really use" aren't you assuming mainstream usage models?
I guess you could call our usage mainstream however most users would probably disagree at a hardware value of around 15k€. ;) The use case is a multi-server VM hosting (only for company use) setup, we do have 3 2-socket Xeon servers and a NAS. Most of the images used by the VMs are on the NAS served by NFS. 2 of the servers do have 10GBase capable SFP+ ports so just for tests we connected them back to back and ran some benchmarks (exporting storage on an SSD to the other server for instance), tweaked a little and reran them but the results we're not decent enough to warrant a multi k € upgrade of the switch and more importantly the NAS which can not even saturate the 1000Base-T links.
> I also wonder why it's necessary to saturate the network in order to believe you're getting the value out of it
Because the only important KPI is the QoS. Improving the bandwidth without having a bottleneck (as indicated by the saturation) does not result in a better QoS, so basically it'd be wasted money and no one likes that. ;)
> The bonding may be more cost effective and practical for a typical corporate data center, but for copying 4K video files around I'd rather have 10GBase-T
Chances are you will be *really* disappointed by the performance you might get out of that. If you want to copy files you'd be much better off looking at a eSATA or Thunderbolt solution...
DCide - Tuesday, November 25, 2014 - link
Thanks for being so specific, Daniel. It all makes sense within your environment.However, I must say that this definitely doesn't apply to all environments. eSATA and Thunderbolt may be good for something like DAS, but tricky or impossible to use for peer to peer data transfer, and no good if there's any distance between them. Ethernet infrastructure is already well established down to the OS level, meaning virtually any two devices with Ethernet ports can generally share data and communicate in any way necessary, right out of the box.
I wouldn't be shocked if you told me 10GBase-T currently tops out around 3-4Gbps in many real-world implementations, because that was the case with Gigabit Ethernet in earlier versions of Windows, for example (before Windows 7, I believe). But then we have to start talking about which OS, hardware, and drivers are involved, because I wasn't seeing the same problem in other OSes at the time. I think a lot of the HW/SW you were using would be prime candidates for non-optimal implementations of 10GbE, or be subject to bottlenecks elsewhere. And I have no doubt your solution was good. But this doesn't mean it's *very* hard to take advantage of the bandwidth, when a simple file copy will do it.
We've been through this a few times already with Ethernet, Fast Ethernet, and then GbE. With Ethernet, most implementations ran around 300KB/s until 3COM came out with their Etherlink III. Suddenly I was seeing around 900KB/s+, or roughly 80-90% of the theoretical maximum. I saw a similar pattern repeated each time, with each new Ethernet standard starting out performing at only 30-40%, then moving up to perhaps 60-70%, and eventually landing at 80-90% of the maximum. So I'm making a rather educated guess that if you use the right OS you can get at least 6-7Gbps out of the ASRock motherboard when it's released, using a very real-world (not synthetic) test of copying files using the OS' copy command. This will make 10GbE very useful in some real-world situations right now.
Daniel Egger - Wednesday, November 26, 2014 - link
> But this doesn't mean it's *very* hard to take advantage of the bandwidth, when a simple file copy will do it.Except that it doesn't. For 1000Base-T you already have to work a bit to get the most of out of a single GB/s link especially if you only have a single client, for 10GBase-T it's impossible. But don't take my word for it, you can read it right on Anandtech...
http://www.anandtech.com/show/7608/netgear-readyna...
Or you can read the case study from Intel:
http://download.intel.com/support/network/sb/fedex...
And those guys definitely know what they're doing...
DanNeely - Thursday, November 27, 2014 - link
The testing setup page of the article mentioned that they got a 10GB switch for running the test network; but there was no mention of getting any 10GB cards to run single client tests for. Looking at where the tests all topped out at; I'm almost certain that the bottlenecks were 1GB ethernet links that each client was running on. Since Intel recently launched its new 10GB cards; maybe they can be convinced to donate a few for review and permanent inclusion in the testbed.DCide - Wednesday, December 3, 2014 - link
I don't quite understand your reasoning, Daniel. Two current Windows machines connected through a basic gigabit switch is about as simple and easy as it gets. Plug and play 100MB/s+. Doesn't even cost very much!No need to try to interpret someone else's tests when you can simply try it for yourself! Once you've established your baseline - that it achieves its full potential (80%+) in a rudimentary configuration, you can proceed to figure out how to avoid bottlenecks and achieve similar performance in whatever complex target configuration you desire.
But don't undermine the basic technology as if it doesn't reach its potential, when in fact it does. If you can't achieve similar results your infrastructure is dragging it down.
Samus - Monday, November 24, 2014 - link
I agree that 10 gigabit is really impractical over copper, which is why you just pickup an SFP NIC and a switch with SFP uplink. If your application isn't for raw downstream throughput (like 10 gigabit backbone with a few dozen 1 gigabit clients) then you can do point-to-point 10-gigabit with SFP NIC's and media converters (which are an added cost.)Most 10 gigabit networks I've built are for the former application. One was for imaging machines at a computer recycling company where downstream throughput was key (I did dual 10-gigabit uplinks from the imaging server to dual 48-port layer 2 switches) and the other was for an office that had a demanding SQL database from 25 or so simultaneous connections, so a single 10 gigabit uplink to a 26 port UNMANAGED gigabit switch was adequate.
Most of the time a teamed gigabit NIC with auto-failover is adequate for networks of <50 nodes.
eanazag - Monday, November 24, 2014 - link
Both. They have had a hard time producing 10G chipsets that use a reasonable amount of energy. They've been plagued with heat and electricity issues in both the client side and switch side. It has taken forever to get base T switches. Fiber ones have been around for a while.The other side is that they can get away with charging a lot, so they do. Competing technologies are fibre channel and infiniband - go price those out with a switch.
Anandtech shown a few years ago that performance for one port 10G connection was better than 4 1Gb ports anyway you looked at it.
Railgun - Monday, November 24, 2014 - link
Supply and demand. Plain and simple. Not a lot outside of the enterprise/datacenter requiring it, and the enterprise vendors know what they can charge for it. Back in the SOHO market, not many care to provide it so...the market/supply is small.Kjella - Monday, November 24, 2014 - link
Because 1000 Mbit/s is way bigger than raw BluRay at 54 Mbit/s, so you can do 20 HDTV streams or 4 UHD streams over GigE if your drives can keep up? Because you need two wired machines in close proximity, since what's the point of a 10GigE server if all that connects is laptops and tablets and smartphones and mostly by WiFi? The only place I know it's been used is between a server and a SAN, topping out at ~700 MB/s (5.4 GBit/s) actual performance it's quite neat but for a very specific niche in number crunching.Samus - Monday, November 24, 2014 - link
One could alternatively buy a PCIe x4 10-gigabit dual-port SFP card for <$100 (eBay) and a switch with 10-gigabit SFP uplink for the same price. You'd even get the added benefit of layer 2/3 management.But I like the idea of having it all integrated, too, but you're right, when it comes to integration, these OEM's seem to "pull a Samsung" and throw it all in plus the kitchen sink. It's overkill.
I just want X99 to come to ITX already. They neglected it with X79 even though its entirely possible, because overclocking is pretty much out of the question and that's 'suicide' to make something non-overclock I guess...
Ian Cutress - Tuesday, November 25, 2014 - link
X99 + ITX won't happen any time soon. The size of the socket and DRAM leaves little space for anything else, and out of the CPU PCIe lanes you'll be able to use 16 on a single PCIe slot. Unless there is significant demand, motherboard manufacturers see that as a waste of resources and users won't want to buy a 40 PCIe lane CPU and not be able to use most of them. I have suggested cutting down to dual channel memory to save space for a chipset and some controllers, but the same argument: users who pay for quad channel support won't want dual channel. Then find space for the power delivery, SATA ports etc. As said, if there is *demand* (1000s of units, not 10s) then there might be a compromise.Samus - Tuesday, November 25, 2014 - link
I have a very light EE background but if they move the vrm array to a vertical riser and use sodimms there is plenty of room for a few integrated components. But you are right there is physiclally room for only a single PCIe 16x slot, but many people don't care about 40 lanes. The raw throughput of the memory bus is my main attraction. It's annoying that the x58 chipset has more raw memory bandwidth than all the mainstream platforms today...just because it was triple channel.dabotsonline - Thursday, November 27, 2014 - link
"Regular readers of my twitter feed might have noticed that over the past 12/24 months, I lamented the lack of 10 gigabit Ethernet connectors on any motherboard. My particular gripe was the lack of 10GBase-T, the standard which can be easily integrated into the home."Ian, haven't Supermicro sold a few X540-based motherboards for quite a while now?
Pork@III - Monday, November 24, 2014 - link
Hmm, impractical? We make 100GbE solutions(100 Gb/s on an single optical line) in mass production from 2012. Many ISPs started recharge their network with a new tech hardware. Don't make lie for that practical or impractical. If we all follow of your way of thinking, humankind would not go out of the Stone Age.Lord of the Bored - Monday, November 24, 2014 - link
I... did you read more than one sentence before replying? Because he's completely right and not just being stubborn and hidebound. Copper wire is a significant bottleneck these days.Optical is too delicate for home use, even ignoring the cost of decent fiber.
"TOSLINK proves it's doable cheap and reliably" I imagine you might say. But TOSLINK is currently limited to a tad over a hundred megabit, and is still quite delicate next to it's copper brethren. Home users don't think twice about tying a knot in a cord to take up excess slack or keep it from wandering off. That breaks even the plastic fibers in TOSLINK immediately, to say nothing of a quality GLASS fiber. And god help the poor soul who runs his chair over a cable.
That's part of why TOSLINK never saw widespread use, and we run our SPDIF over copper through an HDMI jack these days.
The equipment you're talking about is VERY expensive, and it's not just because it's business hardware.
Am I saying there will be no more advances in networking? No, of course not.
Am I saying that the reign of copper is coming to an end? You better believe it.
Am I saying optical fiber is too delicate to be trusted to the home users that apparently run cable with a pair of hamhocks in boxing gloves instead of their hands? Ayup.
Am I saying we'll see 10-gigabit radios in the home before we see 10-Gb cables in the home? God, I hope not.
Pork@III - Monday, November 24, 2014 - link
Asrock thinking for a near future. Not need copper cable to the street swich. Copper will be connection in very short line between optical to copper IO adapter and PC. In distances of decimal inches copper cables have no problem with 10Gb/s speed.Romeen - Monday, November 24, 2014 - link
Both Aquantia and BRCM have been offering 10GE on more than 100m of copper cable for over 5yrs now. The issue was power, but the new generation of 10GE PHYs in 28nm tech will bring PHY power down from the current ~7W/port to almost half. Intel will have the 10GE NICs with Aquantia 28nm PHYs in 2015, less than a year away.DanNeely - Monday, November 24, 2014 - link
Assuming you're talking about Intel Fortville, it's already launched, but they're quoting the same 7W TDP for 2x 10gb, 4x 10gb/1x40, and 4x10/40gb configurations. Unless they're thermal binning, or most of the power is being consumed by components that aren't sensitive to the data rate the equal TDP for all 3 parts seems odd to me.http://ark.intel.com/compare/82945,82946,82944
DanNeely - Monday, November 24, 2014 - link
"In order to deal with the heat from the Intel X540-BT2 chip being used, the extended XXL heatsink is connected to the two other heatsinks on board, with the final chipset heatsink using an active fan."Looking at the pictures, I can't see any connection between the two sinks around the socket and the one at the bottom of the board with a fan.
Ian Cutress - Monday, November 24, 2014 - link
You're right, I thought I saw something there.Jeff7181 - Monday, November 24, 2014 - link
I'm curious, what is the use case for 10GbE at home? I have a server at home with a 6TB RAID5 array and it's no big deal if it takes 4 minutes to transfer a freshly ripped 40 GB mkv file over 1GbE.Is it iSCSI? Who is using iSCSI at home anyway?
As a workstation board for a professional with an iSCSI storage array, this makes some sense... but for even the odd-ball "prosumer" I just don't see the lack of 10GbE being as big a shortcoming as you make it sound. Especially when it doubles the cost of an otherwise very competent workstation motherboard.
alacard - Monday, November 24, 2014 - link
I can't stand this argument. Hey Jeff just because you have no use for, or can't see why any would have a need for faster technology, doesn't mean no one else does. After all, 640K should be enough for anyone, am i right?azazel1024 - Monday, November 24, 2014 - link
Define no use? I use a 2x1GbE with Windows 8.1 and SMB Multichannel to get me 2GbE of true bandwidth. I am changing around my storage, so I am heavily storage bottle necked by my single 3TB 7200rpm drive, but in a couple of months I'll be a 2x3TB RAID0 in my server and desktop instead of the mishmash of 1x3TB and 2x1TB I have going on right now (it used to be 2x2 and 2x1, but one of the 2TB disks started getting a little flakey).Once I am done with the upgrade, I should be able to easily push 300MB/sec over a network pipe sufficient for it. I don't NEED to do that, but I certainly wouldn't mind being able to utilize it. SSD is dropping in price faster than HDDs, and seem to have been for awhile. Price for storage parity is no where close yet, but still, it MAY be someday. Or it might be that most people have no issues with SSD storage, even for "bulk things" like video collections.
My 1.7TiB of data took ~3hrs to transfer over because of disk bound limitations. Once I build it out to a 2x3TB array, it should only be around 2hrs to transfer things to it (and I'll try setting up a 3rd GbE link using the onboard NICs in the machines and running a temp 100ft network cable, just to see if the onboard NICs will play well with the Intel Gigabit NICs I have in the machines...because network porn).
I absolutely do not need 10GbE. I can however desire it and I can see really wanting it once my storage is even faster. I can grok a situation in a few years where my HDDs are getting long in the tooth where it might make sense to pay the resonable premium and just get SSDs for my bulk storage. THEN 10GbE would make a lot of sense to take advantage of the speed provided.
Plus, 10GbE can provide advantages like running remote storage as local storage, especially with iSCSI, as the significantly reduced latency of 10GbE is really needed in some ways to not make that painful.
Pork@III - Monday, November 24, 2014 - link
You is a new William Gates, with a new 640K urban legeng. You is right for itself in this moment. But you make a big mistake for a near future and that we all need when future coming to us.Glock24 - Monday, November 24, 2014 - link
Is Asrock any good as a workstation motherboard? Where I live you can only find Asrock, MSI or Intel, and Asrock products sold here are really crap.For my builds I always import all components and go with Gigabyte for the mobo.
Vatharian - Wednesday, November 26, 2014 - link
Oh my gods, no. Gigabyte for server/workstation? Gods, no, please. I worked on Gigabyte mobo repair service for a couple of years, we had also some drop-ins of other brands. After this I'm genuinely shocked, that Gigabyte still even exists, so do MSI. All of their products, except MSI big bang fusion and 9) are complete and utter crap, from layout (chipset heatsink mounting screw blocking pci-express slot?) through electrical work (no galvanic separation between sound card and usb), to pcb layers coming apart, especially in those with more copper. MSI ofter has misplaced components (I resoldered thousands of mosfets and other small smds just by 1mm, and everything starts working), and absolutely no onboard current spike protection. ASRock was always making ingenious, but often low quality motherboards (caps! MOSFETs!), but everything else was often decent - they have skilled and brave engineers in-house. Recently, when they switched manufacturing, ASRock became surprisingly solid. I find their workstation and Extreme6+ boards astonishingly good quality, on par with ASUS (which dropped in quality recently).mars2k - Monday, November 24, 2014 - link
Need a fast San rig to run a virtualized environment. Ok cool ….already shopping 10Gbe for my home. Lots of decent new 10Gbe nics on EBay for cheaper than you think however these Intel will do if the price is right. Already like Asrock, kudos for the M .2 support along with PLX 8747 and tons of PCIe slots.LSI is great as well, you can find new re-branded LSI raid cards all over the place …way cheap. Why would I pay for an on-board LSI solution that doesn't support Raid 5? Cut the board price by a third (more please) and keep the LSI chips.
name99 - Monday, November 24, 2014 - link
A partial (admittedly less than ideal) step in this direction is to provide multi-path TCP in the OS. In the common situations of interest one is running multiple of the same OS around the house/office, so both ends will support mTCP. One can then aggregate a gigE connection, whatever your WiFi offers, and one or two or three USB (and/or thunderbolt) ethernet adaptors.Yeah, this doesn't give us 10G; but it can easily and cheaply give us 2 or 3G, which is 2 or 3x faster than what we have today...
[Note that this is aggregation at the TCP level. Aggregation at the ethernet level already exists, certainly in OSX, I assume in Linux, but it's finicky and requires special hardware. Working at the TCP level, mTCP should just aggregate automatically and cleanly over all the network ports you may have.]
Which raises the question of: what's the status of mTCP?
- supposedly we had a trial experimental version in iOS7 (which was only used for Siri)
- I saw a few reports that it was part of the Yosemite betas, but it's not there as of 10.10.1
- I believe it's part of Linux (but I've no idea what that means in terms of for which distros it is by default available and switched on)
- Windows, as usual, seems late to the party; I've not even heard rumors about this being targeted as part of Win 10.
Anyone have updates on any of these?
azazel1024 - Monday, November 24, 2014 - link
I think you are late to the party. Windows has multipath as of Windows 8/Server 2012. Windows 8/8.1 and server 2012 support it in the form of SMB Multichannel. Read my comment further up. I've had it running for ~3yrs now to get 235MB/sec from my server to my desktop and back using a pair of Intel Gigabit CT NICs.It is NOT part of Linux last I checked, which was admittedly ~8-10 months ago. No NAS that I am aware of support multipath/channel for increasing network storage performance.
name99 - Monday, November 24, 2014 - link
Real TCP multipath, not SMB multipath...SMB is a much easier problem than TCP, though less general. I agree that it's a good interim solution --- it solves 90% of people's problems with 10% of the work. Apple, however, have committed to TCP aggregation and the general solution, so, after looking at AFP (and I assume also SMB) multipath, they decided to focus all efforts on TCP multipath.
I don't want to denigrate what MS has done --- multipath SMB is useful. But let's also not pretend that it is the same thing that I am talking about.
As for Linux: http://www.multipath-tcp.org
I don't care about NAS, because I don't use one, but you are right that that would be a significant real-world issue, and will probably be messily resolved slowly over the next five years.
The case I care about is multiple macs to macs, ideally full TCP though I could live with just AFP or SMB; and this is what I hoped we'd get with Yosemite.
WatcherCK - Monday, November 24, 2014 - link
Or the new Intel 40Gbit Fortville chipset to really make that home network zing along!http://www.servethehome.com/40gbe-intel-fortville-...