4 points · 1 hour ago

What are your physical core counts? What processors?

You should read up on NUMA spanning.

If you have, for example, 2x 12-core hyperthreaded CPUs, you should never really allocate more than about 8 cores to 1 VM.

With 2x 12-core hyperthreaded CPUs, you have two 24-logical-thread NUMA nodes. If no VM has more than 8 cores, a VM only has to wait until either processor is less than 66% utilized before it can be scheduled, which should be almost always. As long as you don't have 5 VMs pinning their processors, you shouldn't be stuck waiting on context switching.
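If you're on Hyper-V, you can check the host's NUMA topology and cap a VM's vCPU count from PowerShell (a sketch; "BigVM" is a placeholder name):

```powershell
# List the host's NUMA nodes to see how many logical processors each one has
Get-VMHostNumaNode

# Cap a VM at 8 vCPUs so its virtual processors fit comfortably inside one NUMA node
Set-VMProcessor -VMName "BigVM" -Count 8
```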

You don't have a normal email address?

2 points · 14 hours ago

my email address is

[my reddit username] @ [my reddit username] . net

... if anyone in IT thinks that running your own private domain name isn't appropriate, then I don't want to work for them.

Tripp Lite. Just don't expect much from the SNMPWEBCARD add-on module. It... functions, and that's about it.

4 points · 1 day ago

probably black mold.

21 points · 1 day ago

Get a server (or a bunch of servers).

If you get 1 server, it'll need at least 128GB of RAM, 1TB of SSD, 24+ CPU cores that support extended page tables and nested virtualization, and 2 network adapters. You'll also need a switch that supports LACP.

If you can't get those specs in 1 server, you'll need a couple different smaller servers.

1) Install Windows Server 2016 Datacenter on a bare-metal machine. Call this machine "BareMetal". (Or call it whatever you want, but for naming in this list, call it BareMetal.)

2) On BareMetal, configure BitLocker to encrypt the local storage. Configure NTP synchronization to an external time source, a hostname, and IP address information.

3) On BareMetal, configure LACP teaming of your 2 network adapters. You'll need to bond the 2 ports into an LACP channel on the switch, then turn on teaming in the OS and create a NetLbfo team with the appropriate collection of network adapters.
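On Server 2016 the team itself is one line with the LBFO cmdlets (adapter names are examples; match them to your Get-NetAdapter output):

```powershell
# Create an LACP team from the two physical adapters; the switch ports
# must already be configured as an LACP port-channel
New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet1","Ethernet2" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
```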

4) On BareMetal, install and configure Hyper-V. Create a virtual switch that permits MAC address spoofing (needed for nested virtualization) and bind that virtual switch to your NetLbfo team's virtual network adapter.
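A minimal PowerShell sketch of this step (switch and team names are examples; MAC spoofing itself is a per-VM setting, applied later once VMs exist):

```powershell
# Bind an external virtual switch to the team's virtual NIC,
# keeping a management adapter for the host itself
New-VMSwitch -Name "vSwitch-Team" -NetAdapterName "Team1" -AllowManagementOS $true

# MAC address spoofing is enabled per VM network adapter, like this,
# once a VM exists:
Set-VMNetworkAdapter -VMName "SomeVM" -MacAddressSpoofing On
```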

5) On BareMetal, create a VM called "GoldenImage2016". Give the VM 2 CPU cores, 4GB RAM, a 60GB thin-provisioned disk, and 1 network adapter. Install Windows Server 2016 Datacenter. Perform any basic configuration you want. Sysprep the virtual machine with /generalize /oobe, then shut it down. Delete the virtual machine GoldenImage2016, but preserve the VHDs. Set the VHDs as read-only.

6) On BareMetal, create a VM called "GoldenImageWin10". Give the VM 2 CPU cores, 4GB RAM, a 60GB thin-provisioned disk, and 1 network adapter. Install Windows 10 Enterprise. Sysprep with /generalize /oobe, shut down, and preserve the VHDs. Set the VHDs as read-only.

7) On BareMetal, create a set of differencing disks that reference the GoldenImage2016 base disk. This will allow most of your lab VMs to share the base OS disk while being able to make their own overriding changes. From here on out in this list, whenever I say "create a new VM", translate that to "create another differencing disk referencing the GoldenImage2016 base disk and create a new VM on it".
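Creating a differencing disk and a VM on top of it looks like this (paths and the switch name are examples):

```powershell
# Child disk that stores only the deltas against the read-only golden image
New-VHD -Path "D:\VMs\DC01.vhdx" `
    -ParentPath "D:\BaseDisks\GoldenImage2016.vhdx" -Differencing

# New VM booting from the differencing disk
New-VM -Name "DC01" -Generation 2 -MemoryStartupBytes 4GB `
    -VHDPath "D:\VMs\DC01.vhdx" -SwitchName "vSwitch-Team"
```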

8) On BareMetal, create 2 new VMs and give them:

  • Hostname: "DC01 / DC02"
  • IP address: static
  • Install Roles: AD/DC, DNS

9) On DC01, create a new domain in a new forest. Let's call it... asd.lkf. DC01 will become dc01.asd.lkf. Set a Directory Services Restore Mode password and a domain administrator password, and complete setting up your new domain. Create a user account "asdlkf@asd.lkf" with regular user permissions. Create a user account... let's call it asdlkf-ea@asd.lkf. Grant enterprise-administrator-level permissions to this account. Disable the Administrator@asd.lkf account. You now have "asdlkf" as a regular user account and "asdlkf-ea" as an enterprise administration account.
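The forest and account setup can be scripted on DC01 (a sketch; Install-ADDSForest will prompt for the DSRM password and reboot the box):

```powershell
# Install the AD DS role, then promote to a new forest
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName "asd.lkf" -InstallDns

# After the reboot: create the day-to-day and admin accounts
New-ADUser -Name "asdlkf" -SamAccountName "asdlkf" `
    -AccountPassword (Read-Host -AsSecureString "asdlkf password") -Enabled $true
New-ADUser -Name "asdlkf-ea" -SamAccountName "asdlkf-ea" `
    -AccountPassword (Read-Host -AsSecureString "asdlkf-ea password") -Enabled $true
Add-ADGroupMember -Identity "Enterprise Admins" -Members "asdlkf-ea"

# Retire the built-in Administrator account
Disable-ADAccount -Identity "Administrator"
```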

10) On DC02 set your DNS server to the IP of DC01, then join the domain, then promote DC02 to a domain controller.

11) On DC01, set your primary DNS server to the IP of DC02 and secondary DNS server to the IP of DC01.

12) On DC02, set your primary DNS server to the IP of DC01 and secondary DNS server to the IP of DC02.

13) run dcdiag to make sure everything looks healthy.

14) Create a new VM using the Win10 differencing disks called "mgmt.asd.lkf". Install Windows 10 Enterprise. Give it a static IP (for now). Join it to the domain, enable remote administration (RDP), and install ALL of the RSAT tools on it. Optionally, you can use a regular laptop or desktop for this instead of a VM. This machine (mgmt.asd.lkf) is going to be your primary administrative workstation.

15) From here on out in this list, you should strive to perform all of your administration tasks from mgmt.asd.lkf. You should log in to mgmt.asd.lkf using the regular user account "asdlkf", not "asdlkf-ea". Whenever you have to use the RSAT tools, shift-right-click on the appropriate administration tool, choose "Run as different user...", and enter your asdlkf-ea credentials, so that only that 1 tool runs with elevated credentials while the rest of your workstation's applications and processes run without them.

16) Using mgmt.asd.lkf, use PowerShell to remotely install DHCP on DC01 and DC02. (These should probably be separate VMs in the real world, but it would be wasteful to put them on dedicated VMs just for lab purposes. The world is not built out of RAM.) Using the RSAT tools, launch the DHCP administration console on mgmt.asd.lkf and connect to DC01 and DC02. Create some DHCP scopes with appropriate IP ranges, DNS server IP addresses, and gateway addresses. Use the IP address of your home router for the gateway, for now, and the IP addresses of DC01 and DC02 for DNS. Initially create this DHCP scope on DC01. Then configure DC01 and DC02 as a failover replication pair and replicate the scope from DC01 to DC02 with a 60/40 load-balance weight. DHCP should then work with either VM running.
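A sketch of the PowerShell for this step (the 192.168.1.0/24 addressing is an example; substitute your lab subnet):

```powershell
# Install DHCP on both DCs remotely
Invoke-Command -ComputerName DC01,DC02 -ScriptBlock {
    Install-WindowsFeature DHCP -IncludeManagementTools
}

# Create the scope on DC01 with DNS and gateway options
Add-DhcpServerv4Scope -ComputerName DC01 -Name "Lab" `
    -StartRange 192.168.1.100 -EndRange 192.168.1.200 -SubnetMask 255.255.255.0
Set-DhcpServerv4OptionValue -ComputerName DC01 -ScopeId 192.168.1.0 `
    -DnsServer 192.168.1.11,192.168.1.12 -Router 192.168.1.1

# Replicate the scope to DC02 as a 60/40 load-balanced failover pair
Add-DhcpServerv4Failover -ComputerName DC01 -PartnerServer DC02 `
    -Name "DC01-DC02" -ScopeId 192.168.1.0 -LoadBalancePercent 60
```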

17) Using the RSAT tools on mgmt, configure DNS on DC01 and DC02 to resolve external names using root hints, or to forward all queries to your ISP's DNS servers. Your choice.

18) On mgmt.asd.lkf, use the RSAT tools to create a DHCP reservation for mgmt.asd.lkf. Find your MAC address and create a reservation in the scope you created for your chosen static IP address. Bonus points if you use PowerShell instead of the GUI.

19) On mgmt.asd.lkf, convert your network adapter from static to DHCP. It should get the same address back via your reservation. Bonus points if you use PowerShell instead of the GUI.
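For the bonus points, steps 18 and 19 in PowerShell (MAC address, interface alias, and IPs are examples):

```powershell
# Reserve mgmt's current static IP against its MAC address
Add-DhcpServerv4Reservation -ComputerName DC01 -ScopeId 192.168.1.0 `
    -IPAddress 192.168.1.50 -ClientId "00-11-22-33-44-55" -Name "mgmt"

# Then, on mgmt itself, flip the adapter from static to DHCP
# and clear the statically configured DNS servers
Set-NetIPInterface -InterfaceAlias "Ethernet" -Dhcp Enabled
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ResetServerAddresses
```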

20) Pause and reflect. At this point you should have:

  • A hypervisor with differencing disks saving you tons of disk space.
  • A new domain, a new forest, a pair of domain controllers with DNS, and a pair of DHCP servers replicating between each other
  • A management workstation with RSAT tools

A long way in, and barely scratching the surface.

12 points · 1 day ago

21) On BareMetal, create 2 new VMs called "HV01.asd.lkf" and "HV02.asd.lkf". These will be nested hypervisors. (If your hardware doesn't support nested virtualization, HV01 and HV02 can be bare-metal hosts, but this guide assumes you have created HV01 and HV02 as VMs.) Google the PowerShell commands for enabling nested virtualization; you'll need to allow MAC address spoofing and expose the virtualization extensions (Intel VT-x/EPT) to the virtual CPUs of the 2 VMs. Give these 2 VMs lots of resources: 48GB RAM, an 80GB thin-provisioned disk, and 8 CPU cores each. Join HV01 and HV02 to the domain, then use the RSAT tools on mgmt.asd.lkf to install Hyper-V remotely. Use the RSAT tools to open Hyper-V Manager, build a random test VM on each hypervisor, and boot the test VMs to ensure you have nested virtualization set up correctly. If the test VMs boot and can ping out, you are good to continue. Delete the test VMs.
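The nested-virtualization prep the step above tells you to google is roughly this (run on BareMetal while the VMs are off):

```powershell
Stop-VM -Name HV01,HV02

# Expose the CPU's virtualization extensions to the guests
Set-VMProcessor -VMName HV01,HV02 -ExposeVirtualizationExtensions $true

# Allow the nested VMs' traffic out through the guests' virtual NICs
Get-VMNetworkAdapter -VMName HV01,HV02 | Set-VMNetworkAdapter -MacAddressSpoofing On

Start-VM -Name HV01,HV02
```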

22) using the RSAT tools on mgmt, remotely install failover clustering on HV01 and HV02. Create a new cluster called "HV0C.asd.lkf".
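In PowerShell this step looks like the following (the static cluster address is an example):

```powershell
# Install the feature on both nodes remotely
Invoke-Command -ComputerName HV01,HV02 -ScriptBlock {
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
}

# Always validate before building the cluster
Test-Cluster -Node HV01,HV02
New-Cluster -Name "HV0C" -Node HV01,HV02 -StaticAddress 192.168.1.20
```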

23) On BareMetal, create a network share called "\\BareMetal\BaseDisks" which maps to the directory that stores the GoldenImage2016 and GoldenImageWin10 VHDs. Permit HV01$, HV02$, and SYSTEM read-only access. You might also want to grant asdlkf-ea read/write access to the share so that when navigating network shares with "Browse...", your user account has permission to see the files and select them. What's going to happen here is that Hyper-V on HV01 is going to authenticate to \\BareMetal with the AD machine account HV01$ to access the files there. Since no user account is logged in to HV01 most of the time, it needs to use machine authentication to access those disks. SYSTEM will also need access to enumerate file names, etc.

24) Shut down HV01 and HV02. Create a new VHDX on BareMetal called "DiskWitness.vhdx", size 1GB. Attach the VHDX to HV01 and, under advanced options, enable sharing to allow the disk to be connected to multiple VMs. Create a 2nd VHDX called "SharedStorage.vhdx", size 400GB, again with sharing enabled. In HV02's VM settings, attach both DiskWitness and SharedStorage, again selecting the option that allows multiple VMs to connect to the shared disk.
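A sketch of the shared-VHDX wiring (paths are examples; note that shared VHDX formally wants the file to live on a CSV or SMB3 share, and the disks must sit on a virtual SCSI controller):

```powershell
New-VHD -Path "D:\SharedVHDX\DiskWitness.vhdx" -SizeBytes 1GB -Dynamic
New-VHD -Path "D:\SharedVHDX\SharedStorage.vhdx" -SizeBytes 400GB -Dynamic

# Attach both disks to both VMs with sharing (persistent reservations) enabled
foreach ($vm in "HV01","HV02") {
    Add-VMHardDiskDrive -VMName $vm -ControllerType SCSI `
        -Path "D:\SharedVHDX\DiskWitness.vhdx" -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName $vm -ControllerType SCSI `
        -Path "D:\SharedVHDX\SharedStorage.vhdx" -SupportPersistentReservations
}
```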

25) Start HV01 and HV02. From mgmt, use the RSAT tools to initialize DiskWitness and SharedStorage. Create a 1GB volume on DiskWitness formatted with NTFS. Create a 400GB volume on SharedStorage formatted with NTFS with the volume name "SharedStorage". Using Failover Cluster Manager on mgmt, designate the 1GB volume as the cluster's disk witness; this forms the 3rd vote between HV01, HV02, and the witness disk. Then import the 400GB volume and add it to cluster storage.

*NOTE: At this point, and only at this point, you can rename \\HV0C\c$\ClusterStorage\Volume1 to something different. "Volume1" is a horrible name, and once you start putting things into it, you can't easily rename it. Take this opportunity to rename it to \\HV0C\c$\ClusterStorage\SharedStorage.

26) Using the RSAT tools on mgmt, create 2 new VMs on HV01 called "FS01.asd.lkf" and "FS02.asd.lkf". They should use differencing disks located in C:\ClusterStorage\SharedStorage, referencing the base disks on \\BareMetal\BaseDisks.

In addition to the 2 base OS disks for the 2 VMs, create \\HV0C\c$\ClusterStorage\SharedStorage\FS0C\Witness.vhdx (size 1GB) and \\HV0C\c$\ClusterStorage\SharedStorage\FS0C\FileShared.vhdx (size 100GB). Attach both of these additional disks to FS01 and FS02 using the VHDX sharing option, allowing the two disks to be attached to both FS01 and FS02.

Join them to the domain, then use the RSAT tools to install the File Server and Failover Clustering roles.

Then use Failover Cluster Manager to create a new cluster called FS0C and, after creating an NTFS volume on it, set the 1GB disk as the cluster quorum witness. Also import the 100GB disk into failover clustering on FS0C and rename \\FS0C\c$\ClusterStorage\Volume1 to \\FS0C\c$\ClusterStorage\FileShared.

27) Using Failover Cluster Manager, create a new clustered file server across FS01 and FS02 called \\FS0C\files. Set up permissions on the network share so it is accessible by users of the domain.

28) Confirm that from mgmt.asd.lkf you can now navigate to \\FS0C\files.

30) Use failover cluster manager on mgmt.asd.lkf to import the VMs from step 26 (FS01 and FS02) into failover clustering on HV0C. This will essentially register FS01 and FS02 as highly available objects that can migrate between HV01 and HV02.

31) Live migrate FS02 from HV01 to HV02 using failover cluster manager on mgmt.

stop and reflect.

You now have a clustered file server running on 2 VM nodes, which themselves run on 2 clustered hypervisors with common shared storage. We used a shared VHDX for the storage between HV01 and HV02, but this could just as easily have been an iSCSI NAS, an FC SAN, a Storage Spaces Direct array, or any other kind of shared storage implementation. At each level, we used highly available implementations of file services and hypervisors. You can now live migrate FS01 and FS02 between HV01 and HV02 freely within HV0C.

If you need to reboot HV01 for maintenance, simply right-click HV01 in Failover Cluster Manager and "drain roles". This will cause all the VMs on HV01 to live migrate to HV02. When that completes, you can shut down HV01, reboot it, change it, whatever, then bring it back online and it'll rejoin the cluster. Then drain roles on HV02 and all the VMs will move to HV01, etc.

32) Use Group Policy Management on mgmt.asd.lkf to create a folder redirection policy which redirects C:\Users\$username$\Downloads to \\FS0C\files\$username$\Downloads. Create an OU in AD that contains only the mgmt.asd.lkf workstation and link the GPO to that OU. Use "gpupdate /force" on mgmt.asd.lkf to apply the changes immediately. Confirm that the Downloads folder on your local machine is now hosted on the clustered storage.

I'm getting tired so the rest of this list is more point form:

33) Set up Routing and Remote Access Services (RRAS): create a NAT router and redirect all of your VMs' traffic through it. Set up VPN services so you can connect your laptop to your "corporate environment" through the VPN.

34) Set up a certificate server. If you own a real public domain, you can get a certificate for it from Let's Encrypt; for the internal asd.lkf domain, create your own root CA. Create a subordinate CA that issues certificates to all of your domain-joined machines, and create an AD GPO that makes machines request/install/renew certificates from the CA.

35) Set up a RADIUS server and configure a physical managed switch to perform 802.1X authentication with your laptop. Bonus points if it's using certificate-based authentication. You can also configure your switch to use AD-based authentication for management logins: create a security group in AD called "switch management" and grant management access to the switch based on that group membership.

36) Set up a Remote Desktop Session Host and a Remote Desktop Gateway. Set up and configure RemoteFX (this might require a bare-metal hypervisor with specific video cards to work correctly).

37) Set up a print server. Add your local network printer to the print server and supply drivers for Windows 10 x64, Windows 7 x64, and Windows 7 x86. Create a GPO that pushes the printer to end devices; link the GPO to the OU that contains mgmt.asd.lkf. Reboot and confirm the printer has appeared and you are connected to the print server.

38) Set up a WSUS server (warning: GIANT disk hog; this one you might want to put on an external USB rotary drive). Configure your domain to receive updates through the WSUS server.

39) Install something that would require you to expand the AD schema (such as Exchange).

40) Install DFS and create some DFS namespaces. This will allow users to work with simplified names. Allow users to connect to:

\\asd.lkf\files -> \\fs0c.asd.lkf\files
\\asd.lkf\vms -> \\hv0c.asd.lkf\c$\ClusterShared\SharedStorage

41) Install and configure various system-center components:

  • Set up Microsoft SCORCH to automate some tasks for you. Orchestrator is a powerful tool that can automate many tasks.
  • Set up Microsoft SCVMM to manage Hyper-V for you. It can do many different things with Hyper-V or even ESX: clone, reconfigure, migrate, convert... etc.
  • Set up Microsoft SCDPM to do backups for you.
  • Set up Microsoft SCCM and learn how to use PowerShell Desired State Configuration. Configuration Manager can apply various test conditions and trigger remediation actions or alerts: "Does this server have enough free space? If yes, do nothing. If no, email an admin or raise an alert in System Center Operations Manager (SCOM)."
  • Set up Microsoft SCOM and learn how to use its monitoring and alerting features.
  • Set up SQL Server. Set up a SQL Server cluster with SQL Always On.

This list should get you started.

I may or may not update it, idk.

3 points · 2 days ago

use differencing disks.

Step 1) make a normal VM.

Step 2) Sysprep and shut down the VM.

Step 3) Make that VM read-only and never boot it again.

Step 4) Create a fleet of differencing disks all referencing the sysprepped image.

Step 5) Create a fleet of VMs each referencing a different differencing disk.

This will save 90+% of your disk space and let you create dozens or hundreds of VMs very quickly.

When they turn on, they'll be freshly sysprepped, ready to join the domain with unique SIDs.
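The steps above can be stamped out in a few lines of PowerShell (names, paths, and the count are examples):

```powershell
# One differencing disk + one VM per lab machine, all sharing the golden image
1..10 | ForEach-Object {
    $name = "LAB{0:D2}" -f $_
    New-VHD -Path "D:\VMs\$name.vhdx" `
        -ParentPath "D:\BaseDisks\GoldenImage2016.vhdx" -Differencing
    New-VM -Name $name -Generation 2 -MemoryStartupBytes 2GB `
        -VHDPath "D:\VMs\$name.vhdx"
}
```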

1 point · 3 days ago

Can I get one of these for being the first person to upvote this post and one of the 30 people subbed to this subreddit?

Check my post history for all the times I've supported fiberstore:

Netflix didn't like it. Honestly, I'm not exactly sure how, but the majority of the ones I tried didn't work anymore. Though this was when they first started doing it; I'm sure someone has figured out a way around it by now, I just haven't looked.

5 points · 4 days ago

it's pretty simple actually.

VPN providers register ranges of IP addresses from ARIN or similar registries.

Netflix simply keeps a list of which IP addresses or IP address ranges are VPN service endpoints and blocks those.

If you, for example, get a virtual machine in Microsoft Azure, build a VPN gateway on that virtual machine, and route your traffic through that (so it looks like you have an Azure IP, not a "known VPN service" IP), it'll work with Netflix US, no problem.

What are they even achieving here. Looks like fuck all to me.

2 points · 5 days ago

The 3 men are splitting granite or some other kind of rock into harvestable slabs.

The "spikes" are wedges; Each of the wedges are inserted and hammered in at a consistent rate across the entire length of the cut so when the rock cracks it cracks in a nice straight line.

This video explains the process.

It's going to be used for CCNA/CCNP/CCIE-level exams

Hahaha, that will never happen. I can see it for the CCIE because you have to travel for it already. But as /u/login_local said, Pearson is never going to implement this and Cisco isn't about to roll out test centers across the world.

Original Poster · 3 points · 5 days ago

Well, according to the guy who is developing this, cisco requested it be created for exactly that purpose.

That's fine but I'll eat my hat if this is the case in five years.

Original Poster · 5 points · 5 days ago

!RemindMe 5 Years


Can we get some integration into GNS3?

1 point · 6 days ago

it works with VIRL.

2 points · 6 days ago

the SCVMM application is supported inside a VM running atop the infrastructure it is managing.

Cool, now I just need to find a way to join every flavour of android, iPhone, chromebook and IoT device to my Windows Domain and this method will work perfectly ;{>

2 points · 7 days ago

for non AD-based devices you can request/issue certificates manually.

How would you go about doing that in a scalable and manageable way?

2 points · 6 days ago

I never said it was a good option...



2 points · 7 days ago

realistically, we've been building with 4x 8320-32x40G as the core, and then sets of [2x 8320-48x10G] or [2x 8320-48x1G] as leaves.

This lets you do multi-40G connections to servers or access stacks, multi-10G SFP or 10GBase-T connections to servers or access stacks, multi 1G SFP or 1GBase-T connections to servers or access stacks or end-users, or single 1G/10G/40G connections to anything you need to.

Original Poster · 1 point · 6 days ago

Thanks for the suggestion and drawing! This feels a bit cleaner than what we've been considering. Are your leaves configured as VSX pairs?

3 points · 6 days ago

Sometimes we do VSX pairs; sometimes we do purely layer-3 routed with VRF separation of security zones.


10 points · 7 days ago

I tend to prefer the brother label maker in cable wrap mode... best sharpie we've ever spent money on

6 points · 7 days ago

I wouldn't get a cable labeler.

I'd purchase pre-fab rolls of labels.

I buy labels that look like these, pre-printed with my serial numbers:

I have each label printed in pairs, like this:

|000,001|    |000,001|    |000,002|    |000,002|    |000,003|
|000,001|    |000,001|    |000,002|    |000,002|    |000,003|
|000,001|    |000,001|    |000,002|    |000,002|    |000,003|
|000,001|    |000,001|    |000,002|    |000,002|    |000,003|
|       |    |       |    |       |    |       |    |       | 
|       |    |       |    |       |    |       |    |       | 
|       |    |       |    |       |    |       |    |       | 
|       |    |       |    |       |    |       |    |       | 
|       |    |       |    |       |    |       |    |       | 
|       |    |       |    |       |    |       |    |       | 
|       |    |       |    |       |    |       |    |       | 

So when I want to label a cable, I grab the 000,001 label and wrap it around one end of a cable. Each number is printed 4 times per label, so you can read it from any side without twisting the cable. The rest of the blank space is self-laminating: the clear area wraps over the numbers to protect the ink from rubbing off.

I have rolls of serialized cable number pairs that look like this:

I just tell the manufacturer that I want them to pre-print the values in pairs, so I have 2 copies of each serialized number, and they send me rolls with ~ 2800 labels on each roll.

9 points · 7 days ago

not shoot there.

it isn't safe.

What happens if you accidentally loose an arrow while drawing it back and it goes 15 degrees high? You put an arrow into someone's house or window.

3 points · 7 days ago

Fiber has a transmit and a receive interface, hence 2 strands of fiber.

You have to connect a transmit port to a receive port and a receive port to a transmit port.

If the polarity is flipped in your cabling plant, you need to flip the ends at the transceivers.

There are type A, type B, and Type C polarities that come into play when designing/working with a fiber cabling plant.

Think of them as straight through, cross over, and roll over cables.

99% of the time, it's not worth worrying about; at the end of the day, if it works, it's not going to spontaneously break things if you reverse your polarities.

99% of the time, it's easier to simply try swapping your polarities and see if the link comes up. if it does, good. if it doesn't, now you have to start investigating why.

Original Poster · 2 points · 7 days ago

Sorry, I'm still learning here. Can you suggest some reading or ELI5 please? Ideally with a Juniper mindset rather than Cisco.

Are you saying that the TYPE of traffic I have could be a limiting factor on if ECMP is appropriate?

4 points · 7 days ago

not "type" but flows.

ECMP is "equal cost multi-path".

It doesn't mean "Equally Contribute multi-path".

If you have [PC]---[router]---2paths---[router]---[PC]

with a single TCP flow going across it,

basically only 1 of the 2 paths is typically going to be used, because the ECMP algorithm assigns each flow to a single path.

Some algorithms split traffic based on a hash of the source and destination IP addresses; some include port numbers; some include other information.

if you have [100 PCs]----[router]---2paths---[router]----[100 servers]

with thousands of different TCP flows between the PCs and servers, you will see fairly even distribution of usage across the 2 paths.
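A toy sketch of the idea (not any vendor's real hash algorithm, just the shape of it): the router hashes the flow's identifiers and the result picks a path, so one flow always lands on the same link while many distinct flows spread out.

```powershell
function Select-EcmpPath {
    param($SrcIP, $DstIP, $SrcPort, $DstPort, [int]$PathCount)
    # Deterministic hash of the flow tuple: same flow -> same path, every time
    $hash = "$SrcIP|$DstIP|$SrcPort|$DstPort".GetHashCode()
    return [math]::Abs($hash) % $PathCount
}

# A single TCP flow only ever uses one of the 2 paths...
Select-EcmpPath 10.0.0.1 10.0.1.1 49152 443 2

# ...but thousands of distinct flows hash roughly evenly across both.
```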

Original Poster · 2 points · 7 days ago

Ah I see, thanks. That's really helpful. I have lots of servers on one end and lots of computers on the other so this should be beneficial.

I assume I have to configure this on both ends of the links?

1 point · 7 days ago

Any router in between, yes.

You'll probably have to enable multipath, then set a maximum number of paths to balance across, then choose an algorithm.


3 points · 7 days ago

There are SFPs which identify (to the slot they live in) as 1Gb/s, but do 100Mb/s. I imagine there's a dual-port bridge inside the transceiver to manage the speed change.

What sort of media do you intend to use?

The GLC-GE-100FX is one of these, but might not be what you're looking for (it's fiber). If you need copper, why specify that it use an SFP?

This NANOG thread may be of interest.

8 points · 7 days ago

"I identify as a 1Gb/s nic."

I get 3M DAC cables for $19 and I don't have to deal with padding down the light levels on short hops. Fiber attenuators just like to go bad more often....

1 point · 7 days ago


$6 + $6 + $3.

No attenuation needed.

Multimode? My whole network is setup for single mode fiber (I run an ISP so everything in our world is single mode.)

1 point · 7 days ago

why would you run singlemode where a direct attach cable could be used?

for anything structured, sure, use singlemode... but if you are going 3 meters...


Here's the 1GBase-LX version for $16.20:

$7 + $7 + $2.20.


5 points · 8 days ago

Rings aren't bad.

You just have to manage them correctly.

For example, HPE 5900 series switches can form an IRF stack over ethernet cabling at 10/40/100G.

This means you could have 8 switches with 2 in each of 4 different buildings cabled in an "infinity loop ring".

Building 1 has switch 1 and switch 5.

Building 2 has switch 2 and switch 6.

Building 3 has switch 3 and switch 7.

Building 4 has switch 4 and switch 8.


These 8 switches can form a single stack while being composed of an ethernet ring that can span tens of kilometers.

You can even connect 1 to 5, 2 to 6, 3 to 7, and 4 to 8.

The important part is having an appropriately configured management system.

L2 loops are not a thing if you have a stacking fabric, a meshing fabric, an L2 overlay atop an L3 underlay, a purely routed infrastructure, or simply properly implemented spanning tree.

The important part is to have well trained staff who would not do things like cable a loop in switches that are not configured to deal with it.

Or.... get with the times... start routing everything. Loops aren't possible if every interface is routed.
