Plug your computer straight into the PDU via Ethernet cable with the 10.255.1.x address and see if you can access the GUI.
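If it helps, here's a quick way to sanity-check the static IP you assign to your computer before trying the GUI. This is just an illustrative sketch using Python's `ipaddress` module; the /24 mask is my assumption based on the 10.255.1.x address, so adjust it if your unit differs.

```python
import ipaddress

# Assumed default subnet for the AP7920, based on the 10.255.1.x
# address mentioned above -- adjust the mask if your unit differs.
pdu_network = ipaddress.ip_network("10.255.1.0/24")

def same_subnet(host_ip, network):
    """Return True if host_ip falls inside the given network."""
    return ipaddress.ip_address(host_ip) in network

# Your computer's static IP must share the PDU's subnet (and not
# collide with the PDU's own address) for the web GUI to be reachable:
print(same_subnet("10.255.1.50", pdu_network))   # True
print(same_subnet("192.168.1.50", pdu_network))  # False
```

If the second case is what your NIC is actually configured as, the GUI will never load no matter what state the PDU is in.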
Hi, thanks for the suggestion.
Tried it, no-go.
I'll look into making a serial cable.
Did you use a crossover cable? The AP7920 is kinda old, though your computer is probably able to auto-detect which pins to use for TX/RX.
Also, which browser are you using? I can access my PDU web GUIs with Chrome, but if yours has really old firmware it might need Internet Explorer to work.
It's also very possible that the previous owner lied about resetting it. It's happened to me quite often from eBay sellers. If that's the case then there's no telling what the settings are, so it's difficult to tell exactly why you can't access it.
The part# for the serial cable is 940-0144. If you're in the US, they can be had on eBay for <$10. For that amount, I think it's worth saving yourself the time and hassle of crafting a serial cable. APC, I'm sure, uses some proprietary pinout to ensure you have to use their hardware for stuff like this.
So I just went through this. I first bought this kit from eBay, and when I installed it, it didn't work. I found a post on here that I can't seem to find now, which said I needed a new Express card. I ordered this one, part number OPPH2J, and installed it along with the Enterprise card from the original kit, and it all worked great.
But yes, this part number has worked for me for every R210ii I've had to put an iDRAC in as well.
Having a hypervisor would allow you to segregate containers better based on role or service by hosting them on separate guests. It would also allow you to more easily experiment with different operating systems and container technology.
But if you don't do any virtualization at work or plan on doing it in your career I wouldn't say you're really missing anything.
Yeah, I was fearing I'd have to use the FTP server... that thing is so ridiculously slow.
Going directly to 2.90 from 1.70 worked though, and after that the firmware DVD worked fine as well, so the R310 is now up-to-date on everything. Still need to do the R510.
Glad to hear it! I had issues upgrading with older firmware, and 1.7 is the only one I've found that can go straight to 2.9. In fact, I've had to roll back newer firmware to 1.7 to get to 2.9. It's the magic version, I guess.
OmniOS does seem interesting. I'm simply trying to learn more BSD at the moment. I might look into things such as Solaris in the future.
I use OmniOS CE, too, and I agree with /u/CollateralFortune. I migrated from FreeBSD and haven't looked back, either.
There is a bit of a learning curve, but the fundamentals are the same. A lot of the documentation from Oracle translates well, but some syntaxes might be out of date. It's not too hard to translate, I've found.
OmniOS CE is well supported, and if you like a GUI then napp-it is also a well supported package. It allows for surprisingly granular control for being a GUI.
If you're using ZFS, then this is as close as you can get to its native environment without using Solaris (which is no longer supported).
My man, you might want to check the CPUs in these again. X5650s are 6-core CPUs, but your screenshot shows dual 4-cores. X5650s aren't too cheap, so that's the difference between the price being a steal and simply normal.
Indeed. Pair of X5650s is about $50. Pair of X5550s is about $10.
Not a bad deal for the local price, but for $160 it's average.
I might have missed it but did you mention how fast you're routing?
My understanding was that the ARM CPU couldn't handle it.
Very light routing on this switch only. Like I have a few utility networks I shove on there.
They won’t route anywhere near line-rate. The 10Gb version might go 2-3Gbps on a good day.
Disappointing, I guess I'll stick with SwOS then and keep letting the 3750X route stuff upstream.
Why the persistence with RouterOS? Principle of the matter?
Do you know the part numbers? I have some spare IBM rails to sell, and I'm in NY so shipping is cheap.
If they're the right ones, I'll give them to you for cost of shipping and a box, so probably less than $20 total
Edit: oops sorry man, I have 1U rails and the x3650 is definitely 2U
Was at that 3Gbps bottleneck for some time. I was able to max out my 10Gb card when using Windows Server with a striped RAID on 6 drives.
Will stick with FreeNAS and its 3Gbps limit though, as I do not want to lose data integrity, and also it will be a cold day in hell when I use Windows for a file server.
I can only get over 9Gbps when the direction is from my SAN to an ESXi Linux guest. The other way around doesn't work. And my Windows machines top out at 2.4Gbps, and my Hyper-V Linux guests top out at about 5Gbps. All on the same VLAN, so no routing.
It's weird, and it has to be a misconfiguration somewhere. I have jumbo frames enabled and can ping 9000 bytes unfragmented from everywhere, but file transfers are just slow, whether to bulk-storage RAIDZ2 or RAID0 SSDs.
But I guess 300MBps is better than 30MBps for the time being hah.
Also I'm using OmniOS so it's not a FreeBSD/FreeNAS thing.
Can I ask which card you’re using? I’m still holding on to OmniOS for my filer as well and seem to recall reading that support had been dropped for the Mellanox cards.
Chelsio and Intel have good support and are the typical go to recommendations from the community. Not sure about mellanox. Taking a look at the IllumOS HCL, I only see IB cards from mellanox.
I think Oracle 11.3 supports mellanox?
An X520-DA1 is about $50, so if you want to stick with OmniOS I'd just go with that. You'll know your card will be supported for the foreseeable future.
The 12-bay does use an expander backplane with the SAS A and SAS B inputs. The internal bay headers are on the backplane as well. That makes them a little nicer in that regard versus the R720XD. However, they are not rear hot-swap like the R720XD either.
The internal bay headers are on the back plane as well. That makes them a little nicer in that regard versus the R720XD.
What do you mean? The flex bay on my R720xd has SAS cables that connect to the backplane.
There is a more or less standard SAS cable on the R720XD that runs from the front backplane to the hot-swap assembly in the back. I believe it gets power from a header close to the board in the back. It still goes through the multiplier on the front backplane, but it takes more hardware to add the 13th and 14th drives than on an R510.
On the R510, the internal bays were just power and SAS/SATA data cables from the backplane to the internal cage.
Ah okay, I gotcha. For some reason I read your post as saying something like the rear hotswaps are connected to something different like directly to the motherboard.
MPIO is what allows an iSCSI drive to be accessed by more than one machine simultaneously.
MPIO is a way to add performance and redundancy to iSCSI stores via additional network connections, not to explicitly allow multiple machines to access them simultaneously. It CAN fulfill that function, if you wanted a specific initiator to access via one IP/FQDN or another, but that's not its primary purpose.

My understanding is that the filesystem used on the LUN is what matters, but beyond that, it's a free-for-all. One initiator could freely and independently overwrite data that the other initiator thinks it has exclusive access to. This would apply to NTFS, FAT, or presumably any other *nix filesystem that assumes exclusive access by the OS.
Yes, you're right. I was thinking incorrectly, thank you for clearing that up.
The disk witness is what coordinates the storage access/activity I believe to prevent the free for all.
Sorry to double up, but can you create a Hyper-V cluster using just the Hyper-V hypervisor install, thus not needing a Windows license?
I used Server 2016 as opposed to Hyper-V Server, so I won't say this with 100% certainty, but probably. As far as I know, that ability isn't behind a paywall. I think the only advantage that using a Server install offers, depending on the version, is that the OS can do something besides host VMs and/or you can use AVMA.
And to answer a couple of your other questions:
A cluster shared volume (CSV) is just Microsoft's term for the shared drive that holds the virtual hard disks (VHDX) and VM configuration files (VMCX/VMRS).
At minimum, you need two CSV's: one as the data store and one as the disk witness in quorum. These will be separate shared drives that both hosts can connect to. The quorum disk holds the configuration database, and it tells which Hyper-V host should be active at any one time. It can be small, and unless you have an enormous cluster, 1GB will be plenty.
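To illustrate why that small witness disk matters, here's a toy sketch of quorum voting. This is my own simplification for illustration, not Microsoft's actual algorithm: each host and the witness gets one vote, and a partition of the cluster stays active only with a strict majority.

```python
def has_quorum(votes_present, votes_total):
    """A cluster partition stays active only if it holds a strict
    majority of all configured votes."""
    return votes_present > votes_total // 2

# Two hosts, no witness: 2 total votes. If the link between them
# drops, each host sees only its own vote, so neither side can
# claim a majority -- a 1-vs-1 tie.
print(has_quorum(1, 2))  # False

# Two hosts plus a disk witness: 3 total votes. The host that can
# still reach the witness holds 2 of 3 votes and stays active,
# while the isolated host (1 of 3) stops hosting VMs.
print(has_quorum(2, 3))  # True
print(has_quorum(1, 3))  # False
```

That tie-breaking is the whole job of the witness, which is why such a tiny disk is sufficient.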
I had my cluster access the CSV's via iSCSI and multipath input/output (MPIO). MPIO is what allows an iSCSI drive to be accessed by more than one machine simultaneously. SMB is also able to be used by Hyper-V clusters. I'm sure there are benefits/drawbacks to using either one, and I don't know them, but I wanted the experience with configuring shared iSCSI storage. It's pretty simple, and this writeup is fantastic if you have no clue how to do it. You'll have it working in probably 15 minutes following that guide.
Sorry if the question wasn't more clear.
I'm wondering what could cause the ASA to log the internal IP as the external initiator and the external IP as the internal destination.
I look at thousands of these logs a day, but it's the first time I've seen this.
I totally missed what you were saying. That is interesting indeed. I'll have a look at our logs tomorrow and see if I can find anything similar.
I asked a friend who has his CCIE Security, and this is what he said though I haven't been able to find Cisco documentation on it.
If a connection passes from an interface with a higher security setting to a lower security setting then the ASA will log the connection as inbound.
I'm not sure that explains what I'm seeing here, though. Even if the interface PD were a DMZ interface, at best it would have the same level of trust as the WAN/outside interface. I don't see any situation where the outside interface would be more trusted than anything, DMZ or otherwise. And if they had the same security setting, then I would think that the ASA would still log it as outbound since it's exiting the outside interface.
Like I said I'm not the administrator for these ASA's, and they're probably the only one who can give me an exact answer. But I'm still curious if you're able to find anything similar.
edit: I know Netflow is Cisco's solution for seeing data flow, and the syslog is probably just the best guess based off of certain criteria, but unfortunately I don't have access to Netflow. Only the syslogs.
He posted an album with a lot of different stuff. The 6120 is in some of the later pics.
Since it's packaged up I can't take new photos without tearing it out. Also I'm lazy and don't want to waste tape. So I reused this photo album which I can't edit because I don't have an imgur account.
Though it is the size of a server, so some people might still get confused hah
The Mikrotik doesn't have the ability to change MTU
Then you would be better off without jumbo frames. You would need ALL devices in the chain to support them. Having a switch that does not support jumbo frames while they're enabled on a NIC can SEVERELY impact your performance.
Probably, I'm not a fan of the management capabilities the Mikrotik offers.
However I'm able to ping at 9000 bytes without fragmenting the packets, so I don't think it's the Mikrotik.
What OS are you using for your ping? Mind sharing the output?
c:\> ping -f -l 8972 10.10.10.8

Pinging 10.10.10.8 with 8972 bytes of data:
Reply from 10.10.10.8: bytes=8972 time<1ms TTL=255
Reply from 10.10.10.8: bytes=8972 time<1ms TTL=255
Reply from 10.10.10.8: bytes=8972 time<1ms TTL=255
Reply from 10.10.10.8: bytes=8972 time<1ms TTL=255

Ping statistics for 10.10.10.8:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms
$ ping -M do -s 8972 10.10.10.8

PING 10.10.10.8 (10.10.10.8) 8972(9000) bytes of data.
8980 bytes from 10.10.10.8: icmp_seq=1 ttl=255 time=0.418 ms
8980 bytes from 10.10.10.8: icmp_seq=2 ttl=255 time=0.477 ms
8980 bytes from 10.10.10.8: icmp_seq=3 ttl=255 time=0.485 ms
8980 bytes from 10.10.10.8: icmp_seq=4 ttl=255 time=0.492 ms

--- 10.10.10.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3064ms
rtt min/avg/max/mdev = 0.418/0.468/0.492/0.029 ms
edit: 10.10.10.8 is OmniOS, my SAN
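For anyone wondering where the 8972 in both commands comes from: it's the jumbo-frame MTU minus the IPv4 and ICMP headers. A quick sanity check of that arithmetic:

```python
MTU = 9000          # jumbo frame size on the link
IP_HEADER = 20      # IPv4 header with no options
ICMP_HEADER = 8     # ICMP echo header

# Largest ICMP payload that fits in one unfragmented jumbo frame,
# which is the value passed to -l (Windows) and -s (Linux) above:
max_payload = MTU - IP_HEADER - ICMP_HEADER
print(max_payload)  # 8972

# Linux reports payload + ICMP header per reply, hence the 8980
# in the Linux output:
print(max_payload + ICMP_HEADER)  # 8980
```

If either ping succeeds at that size with fragmentation disallowed, every hop on the path is passing a full 9000-byte frame.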
I use made up emails all the time if something requires me to register. No telling if they're real or not, so I imagine some people have gotten curious emails on my behalf before.
If there's no indication of your email being compromised I wouldn't sweat it. Change your passwords if it will help you feel better.
Yes, I am using PowerChute, and only the master computer is connected to it. There are no other options for connecting to it.
Each computer you want managed by PowerChute has to have PowerChute installed and also be connected to the UPS via USB.
There are other options such as NUT which has been mentioned already.