
[–]cdnpaul 18 points (7 children)

Common problem when you spare no expense. I have networks that are 15 years old with 2,000-3,000 users, where they bought the absolute top of the line, and they still aren't seeing CPU hits above 15% on their core and edge switches today. So it pains them to swap it out.

As for your issue, off the top of my head there are two companies that can do what you need: Spirent (https://www.spirent.com) and Ixia (https://www.ixiacom.com/products/ixnetwork).

This stuff isn't cheap.

But there are open-source traffic generators out there; you'll just have to experiment to find the best one.
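
For example, here is a crude sketch with Scapy, one of the open-source options. The interface name, addresses, and payload size are assumptions for illustration, and Scapy won't get anywhere near line rate, so treat it as a functional test rather than a real load tool:

    # Crude open-source packet blaster using Scapy (pip install scapy).
    # Interface, addresses, and payload size are assumptions for illustration.
    from scapy.all import Ether, IP, UDP, Raw, sendp

    pkt = (Ether()
           / IP(src="10.0.0.1", dst="10.0.0.2")
           / UDP(sport=5000, dport=5001)
           / Raw(load=b"x" * 1000))

    # loop=1 resends the frame forever, inter=0 sends as fast as Python can manage
    sendp(pkt, iface="eth0", loop=1, inter=0)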

Another nice tool is Nortel Multicast Hammer, or MCHAMMER. You just need MCHAMMER installed on 2 or 6 laptops and you can bring a network to its knees. You can still find MCHAMMER on download sites in the wild.

[–]Sunstealer73 3 points (5 children)

I work in K-12 and see school systems putting in 40G backbones, and I have no idea why. The school system I work for is much bigger than them and runs just 1G backbones everywhere but the data center. We do a collapsed core, with each switch having its own links directly back to the core. We run OpManager and monitor every interface. The only time we come close to maxing out 1G is when imaging multiple labs at once.

[–]cmPLX_FL[S] 6 points (0 children)

I used to work in K-12 for a school district with 40-ish schools. Before I left, they had started doing 1-to-1 deployments of Chromebooks for students. Before that, over the summer or whenever needed, we were replacing entire school infrastructures with at least 10G backbones back to the core network.

Plus, all the testing that takes place in FL is online now, so that was another reason to up the backbones.

Plus Plus, streaming of educational videos.

(I damn standardized online testing to hell and wish it a fiery death)

[–]RememberCitadel 3 points (0 children)

We commonly do that for the large spikes, not the usual throughput.

When you have 10k kids all logging into Chromebooks and downloading their profiles at once, they can easily pull 3-5 Mbps per Chromebook during that stage. Google caching servers located on the network can help keep that from hitting the internet, but you still need the backbone to carry it.
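
A quick back-of-the-envelope sketch of that login spike (the 10k clients and 3-5 Mbps figures come from the comment above; the concurrency factor is purely an assumption):

    # Rough aggregate demand during a Chromebook login storm.
    # Client count and per-client rate come from the comment above;
    # the concurrency factor is a made-up assumption for illustration.
    clients = 10_000
    per_client_mbps = (3, 5)    # low/high estimate per Chromebook
    concurrency = 0.5           # assume half are pulling profiles at the same instant

    for rate in per_client_mbps:
        aggregate_gbps = clients * concurrency * rate / 1_000
        print(f"{rate} Mbps/client -> ~{aggregate_gbps:.0f} Gbps aggregate")

    # Even at 50% concurrency this lands in the 15-25 Gbps range,
    # which is why a 1G backbone falls over during these spikes.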

[–]passw0rd_ 1 point (0 children)

I've done 40G infrastructure for K-12 before. A lot of the schools I've worked with do a lot more than 1G of traffic to the Internet. When I worked with different county networks, the biggest bandwidth hogs were the public schools. Some of the counties provide all of their students with tablets/laptops, and a lot of their work, including tests, is done online.

[–]jimothyjones 0 points (0 children)

Well, school shootings mean way more video equipment, plus VDI and Chromebooks and the new notion that everyone's iPhone needs to be on Wi-Fi. Which is weird, because I was suspended from high school for bringing a pager to school during the crack epidemic years. By that mindset, only drug kingpins would need Wi-Fi access for their smartphones while at school.

[–]kungfumastah 0 points (0 children)

A few reasons for this.

First, 1:1 & BYOD. These are causing explosive bandwidth and capacity issues at the edge, which is why technologies like MU-MIMO and NBase-T were invented. K-12 is often cited as the worst possible environment for Wi-Fi in general, due to the density of the clients and the transient nature of the users (read: roaming hell).

Second, funding. E-Rate is a federal program that provides money to K-12s specifically for network connectivity (LAN, WAN, I1, burying dark fiber, etc.). The E-Rate program has been funded very well for the last few years, so many K-12s have opted to "shoot the moon" and over-build, since they have the chance to get a good portion of their money back from the federal government. With the new administration, the future of the program is questionable at best, so many school districts are maximizing their purchases now while the money is there.

Third, the dual nature of bandwidth. Keep in mind that bandwidth is both a measure of volume and of throughput. When people say "we barely use x% of our bandwidth, why would we increase it?", they are referring to the volume of the segment.

A 40Gb link will improve end-user application performance by offering higher throughput in flight. This is especially true if you have a 40Gb data center environment (which is pretty cheap and common these days). Student/education management systems are now the lifeblood of education (higher and lower). If they are hosted in the school's data center, end-user performance can be tremendous using 802.11ac Wave 2 gear on a 40Gb/s backbone.
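
As a rough illustration of the throughput side, here is a sketch of single-transfer time at different link speeds (the file size is a made-up example, and protocol overhead is ignored):

    # Transfer time for a single flow at different link speeds, ignoring
    # protocol overhead and assuming the flow can actually fill the pipe.
    # The 500 MB file size is a hypothetical example.
    file_bytes = 500 * 1024**2
    for gbps in (1, 10, 40):
        seconds = file_bytes * 8 / (gbps * 1e9)
        print(f"{gbps:>2} Gbps link: ~{seconds:.2f} s in flight")

    # ~4.2 s at 1 Gbps vs ~0.1 s at 40 Gbps: average utilization can be low
    # while individual transfers still finish much faster on the fatter link.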

IMO, the engineers out there who neglect the user-and-application-centric point of view often fail to meet their organization's objectives. For the most part, end-user experience is generally enhanced by "throwing bandwidth at the problem", whether that's right or wrong.

[–]Enrage (Certified Router Jockey) 1 point (0 children)

TRex is where it's at today for packet blasting. A commodity server + TRex can push 200G of traffic.
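
For anyone curious, here is a minimal sketch of driving TRex from its stateless Python API. The module path, option names, and the rates used below are assumptions that vary between TRex releases, so check the docs for your version:

    # Minimal TRex stateless sketch: blast a continuous UDP stream out port 0.
    # Module path and option names differ between TRex releases -- treat the
    # specifics here as assumptions and check your version's documentation.
    from trex_stl_lib.api import *  # matches the style of TRex's own examples

    base_pkt = Ether() / IP(src="16.0.0.1", dst="48.0.0.1") / UDP(dport=12) / ("x" * 64)
    stream = STLStream(packet=STLPktBuilder(pkt=base_pkt),
                       mode=STLTXCont(pps=1000))        # continuous base stream

    c = STLClient(server="127.0.0.1")                   # TRex daemon on the same box
    try:
        c.connect()
        c.reset(ports=[0])
        c.add_streams(stream, ports=[0])
        c.start(ports=[0], mult="10gbps", duration=30)  # scale up to ~10G for 30 s
        c.wait_on_traffic(ports=[0])
        print(c.get_stats()[0])                         # per-port tx/rx counters
    finally:
        c.disconnect()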

[–]amaged73 1 point (1 child)

Have you seen any problems with these numbers?

[–]cmPLX_FL[S] 1 point (0 children)

Each of the endpoints is in a stack, with 40Gbps interconnectivity within the stack.

1.92 Tbps is what the switch is rated to handle.

No issues so far other than the older SAN having 98% read usage.
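
As a rough sanity check on that rating, under a hypothetical port layout (the thread doesn't name the switch model, so the counts below are assumptions):

    # Does a 1.92 Tbps switching capacity cover a fully loaded switch?
    # Port counts are hypothetical; the actual switch model isn't stated.
    fabric_tbps = 1.92
    ports = {10: 48, 40: 6}    # assumed 48x10G access + 6x40G uplinks

    # Vendors usually quote capacity full duplex, so count each port twice.
    needed_gbps = sum(speed * count for speed, count in ports.items()) * 2
    print(f"worst-case demand: {needed_gbps / 1000:.2f} Tbps "
          f"vs fabric rating {fabric_tbps} Tbps")

    # With this layout the worst case is ~1.44 Tbps, comfortably under the
    # 1.92 Tbps rating, so the fabric itself shouldn't be the bottleneck.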