Inside the FASTEST New 800GbE 64-port Switch

Published 2024-07-22
We get inside the 64-port 800GbE switch to see how these new behemoths of networking are being built. This is a new 51.2Tbps switch based on the Marvell Teralynx 10 chip; alternatively, its ports can break out to connect 512 links at 100GbE each. The 51.2Tbps switch generation is going to be a big deal as 2025 AI clusters adopt these switches.
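
As a quick sanity check on those numbers, here is a rough back-of-the-envelope sketch in Python (the 8 x 100GbE breakout per 800GbE port is an assumption about how the 512-link figure is reached, not a vendor spec):

# Rough port/bandwidth math for a 64-port 800GbE switch (sketch, not vendor specs)
ports = 64                # front-panel 800GbE ports
port_speed_gbps = 800     # per-port line rate in Gbps
breakout_lanes = 8        # assumed 8 x 100GbE breakout per 800GbE port

total_tbps = ports * port_speed_gbps / 1000   # 64 * 800 Gbps = 51.2 Tbps
breakout_links = ports * breakout_lanes       # 64 * 8 = 512 x 100GbE links

print(f"Aggregate bandwidth: {total_tbps} Tbps")
print(f"100GbE breakout links: {breakout_links}")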

Thank you to Marvell for giving us access and sponsoring this so we could travel and make the video.

STH Main Site Article: www.servethehome.com/inside-a-marvell-teralynx-10-…
STH Top 5 Weekly Newsletter: eepurl.com/dryM09

----------------------------------------------------------------------
Become a STH YT Member and Support Us
----------------------------------------------------------------------
Join STH YouTube membership to support the channel: youtube.com/channel/UCv6J_jJa8GJqFwQNgNrMuww/join

----------------------------------------------------------------------
Where to Find STH
----------------------------------------------------------------------
STH Forums: forums.servethehome.com/
Follow on Twitter: twitter.com/ServeTheHome
Follow on LinkedIn: www.linkedin.com/company/servethehome-com/
Follow on Facebook: www.facebook.com/ServeTheHome/
Follow on Instagram: www.instagram.com/servethehome/

----------------------------------------------------------------------
Other STH Content Mentioned in this Video
----------------------------------------------------------------------
- Inside the Innovium Teralynx 7 32-port 400GbE Switch:    • Inside a 400GbE 32-port Teralynx 7 Hy...  
- Inside a 64-port 400GbE switch:    • FASTEST Server Networking 64-Port 400...  
- Inside a Dell 32-port 100GbE switch:    • Why the Dell S5232F-ON is a Vastly Be...  
- Inside an Arista 32-port 100GbE switch:    • Inside an Arista 32x 100GbE switch th...  
- Cheap Marvell Powered MikroTik 100GbE switch:    • Ultimate 100GbE Homelab and SMB Switc...  
- Marvell buys Innovium:

----------------------------------------------------------------------
Timestamps
----------------------------------------------------------------------
00:00 Introduction
02:01 Getting inside the Marvell Teralynx 10
14:30 Wrap-up

----------------------------------------------------------------------
Comments (21)
----------------------------------------------------------------------
  • 800Gbit/s is pretty fast. But nothing is faster than Patrick when he speaks about crazy fast technology 😂
  • @james1234168
    This thing is a monster! Being able to break out into 512 x 100GbE ports is mind-blowing!
  • @huy1k995
    9:25 HOW MANY LAYERS IS THAT PCB!? That's like HALF A RJ45 port height in PCB depth. I've never seen one that thick before.
  • @ClockDev
    Looking at the thickness of the board at 09:03, and the vast number of signals the single chip needs just for the front ports, this has to be one of the most complex PCBs ever made (if not the most). That's an incredible number of high-speed signal traces that need to be finely adjusted, one by one, just to have good timing constraints and good impedance matching between the IC and the transceivers.
  • @computersales
    It's crazy that this 2U switch can replace 16U of 100Gb switches. Hate to imagine what 8 cables coming out of each port looks like, though.
  • @mr_jarble
    I used to brag about my home lab with 100G switching... I am so glad that you are able to bring us amazing content like this; it greatly pleases the hardware nerd that I am <3
  • @littlenewton6
    My homelab needs this switch! I need to use it to watch videos on my Jellyfin.
  • Marvell is coming back with a vengeance with this one. The density in this thing is insane
  • @k3rsh0k
    I need this for my homelab. It’s non-negotiable. Instagram won’t work without it.
  • @itsnebulous8507
    I appreciate your detailed look at high-end networking equipment—it's something we network engineers don't often see. For future content, could we delve into the more technical aspects that are critical for those of us in the industry? Some things I wanted to see mentioned, specifically as a networking professional in the HPC space:
    What type of ASICs and RPs are in a piece of equipment, along with their L2/L3 forwarding rates for determining oversubscription. PPS capability would be helpful too. You went into radix, which is very helpful for considerations about network topology like fat tree, and I do appreciate that you briefly went over spine/leaf in some diagrams.
    A bit about the CAM/TCAM would be helpful for understanding RIB and FIB sizing and ACL capability.
    Covering the different redundancy modes the switches support would be helpful as well, such as whether they handle active/active (MLAG) or active/passive (stacking) scenarios; that would tell us a lot about their control plane function from a redundancy perspective. In line with that, I'd like to see some exploration into how L2/L3 multipathing is implemented.
    I'd also like to see some breakdown of what is meant by 'large buffers' and the specific Quality of Service methodologies implemented (inbound shaping, colored policing, DSCP tagging support, WRED or ECN), plus some mention of RDMA/RoCE support for HPC-centric switching platforms, because it's very important.
    Maybe too detail oriented... But network engineers don't typically get to read anything but whitepapers, datasheets, and slide decks, so the format is very captivating.
  • @mx338
    The RJ45 cables connected for management look very cute.
  • @thetux0815
    Where's the affiliate link to buy one? 😂
  • @ewenchan1239
    And here I am, sitting at home, with my piddly 36-port 100Gbps InfiniBand switch in my basement.
  • @hquest
    I mean, under 3000W power consumption is not absurd. All the Cisco 6500-E chassis we ever purchased came with a minimum of dual 4500W PSUs. But bottom line, great to see competition against Broadcom and their near-monopoly in this segment.
  • @AWildLeon
    My 10Gbit Network seems like Peanuts against this 😂
  • @marsrocket
    Thinking of getting one of these for my pi cluster.