
Per Folder vSphere-vCheck Reporting

Check out vCheck if you haven’t already. It makes finding and correcting issues in a vSphere environment much easier.

The script inspects the assets managed by one vCenter server. I found a way to break the reporting down to the folder level, which is useful when you have a development team lead who needs to keep resources under control.

Run “vCheck-vSphere/Select-Plugins.ps1” and only select the plugins that your group administrator will care about. I limited the plugins to VM issues.

Edit the script: "vCheck-vSphere/Plugins/00 Initialize/00 Connection Plugin for vCenter.ps1"

Around line 202 is the cmdlet that pulls back all the VMs. You can limit the pull to a single folder, resource pool, or even datastore here. I limited it by folder and used the folder IDs just in case there were duplicate names.

# Original
$VM = Get-VM | Sort-Object Name

# Per-folder example
$VM = Get-Folder -Id Folder-group-v1,Folder-group-v2,Folder-group-v3 | Get-VM | Sort-Object Name
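The Folder-group-* values above are folder IDs from my environment. If you need to look them up in yours, a quick sketch like this should do it (not from the original post, so treat it as a starting point); the same pattern works if you would rather scope by resource pool or datastore, where the names shown are just placeholders:

# List VM folders with their IDs so you can plug them into Get-Folder -Id
Get-Folder -Type VM | Select-Object Name, Id | Sort-Object Name

# Equivalent scoping by resource pool or datastore, if that fits your layout better
# $VM = Get-ResourcePool -Name "Group-RP" | Get-VM | Sort-Object Name
# $VM = Get-Datastore -Name "Group-DS" | Get-VM | Sort-Object Name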

Big thanks to Alan Renouf and all the other vCheck contributors!


Evacuate Datastore Script When Migrating to a New SAN

One of the coolest features of VMware’s DRS is that you can place a host in maintenance mode, and VMs will just start vMotioning over to other hosts. It makes maintenance a breeze. I assumed this same functionality would exist when you place a Datastore in maintenance mode, but nope. No dice.

Here’s a script to move virtual disks off a datastore and onto a datastore cluster. This is useful when you are migrating to a different SAN.

I hit an error during one migration, and the script kept kicking off Storage vMotions after the failure, so a try/catch that exits on error was necessary.

# Evacuate Datastore
# By Greg Carriger
#==========
# Variables
#==========
$EvacuatingDataStore = "OldDatastore"
$DestinationDataStoreCluster = "NewDatastoreCluster"
#==========
# Script
#==========
try {
    Get-Datastore $EvacuatingDataStore | Get-VM | ForEach-Object {
        # Pick the datastore in the destination cluster with the most free space
        $FreeDataStore = Get-DatastoreCluster -Name $DestinationDataStoreCluster | Get-Datastore | Sort-Object -Property FreeSpaceGB -Descending | Select-Object -First 1
        # -ErrorAction Stop makes a failed Storage vMotion land in the catch block
        Move-VM -VM $_.Name -Datastore $FreeDataStore -ErrorAction Stop
        # Show remaining free space per datastore in the destination cluster
        Get-DatastoreCluster -Name $DestinationDataStoreCluster | Get-Datastore | Sort-Object FreeSpaceGB -Descending | Select-Object Name, FreeSpaceGB | Format-Table
        $VMsLeft = (Get-Datastore $EvacuatingDataStore | Get-VM | Measure-Object).Count
        Write-Output "VMs left: $VMsLeft"
    }
}
catch {
    Write-Output 'Encountered an error. If it is the "Operation is not valid due to the current state of the object" error, restart all PowerShell sessions.'
    exit
}

Possible enhancements:

1. Logging

2. Test for enough space before migrating (a rough sketch of this check is below).
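A minimal sketch of the space check, assuming the VM’s UsedSpaceGB property is a close enough size estimate; it would sit inside the ForEach-Object loop just before the Move-VM call, and it is not part of the original script:

# Hypothetical pre-flight check: skip the VM if it will not fit on the
# emptiest destination datastore, keeping a 100 GB buffer (adjust to taste)
$BufferGB = 100
$vmUsedGB = [math]::Ceiling($_.UsedSpaceGB)
if (($FreeDataStore.FreeSpaceGB - $vmUsedGB) -lt $BufferGB) {
    Write-Output "Skipping $($_.Name): not enough free space on $($FreeDataStore.Name)"
    return  # inside ForEach-Object, 'return' moves on to the next VM
}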

Batch Update vSphere tags with Rubrik SLA Info


Guess I should start with why. This was written so VM owners can check what types of backups they should expect, which avoids adding a ton of people as Rubrik users.

First, install the community-supported Rubrik PowerShell module.

Run powershell.exe

Install-Module -Name Rubrik -Scope CurrentUser

Get connected and dump out your Bronze, Silver, and Gold SLA machines.  This assumes you are using default SLAs.

Make sure you update the two variables at the top to match your environment.

$RubrikURL = "RUBRIK-URL.DOMAIN.COM"
$WorkingDir = "~\Desktop\RubrikVMs.csv"

Connect-Rubrik $RubrikURL
$data = Get-RubrikVM -SLA Bronze | Select-Object name,moid,effectiveSladomainname
$data += Get-RubrikVM -SLA Silver | Select-Object name,moid,effectiveSladomainname
$data += Get-RubrikVM -SLA Gold | Select-Object name,moid,effectiveSladomainname
$data | Export-Csv $WorkingDir

Update your vSphere VMs with the backup status. I build out my code in Excel and then execute it with PowerCLI.

$vCenterURL = "VCENTER-URL.DOMAIN.COM"

## Excel formula to build your PowerCLI
=CONCATENATE("Get-VM -Id VirtualMachine-",B2," -Name """,A2,""" | New-TagAssignment -Tag ",C2)
## Example of the completed code
Connect-VIServer $vCenterURL
Get-VM -Id VirtualMachine-vm-123 -Name "VirtualMachineName" | New-TagAssignment -Tag Bronze

Run the code created from Excel, and you’ve updated all your machines. I find VMs using both the ID and the name, because you could get duplicate names when running against multiple vCenters. The absolute best way would be to use the hashed ID and/or the unique vCenter ID.
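If you would rather skip Excel entirely, here is a rough PowerCLI sketch that reads the CSV from the Rubrik step and applies the tags directly. It assumes the CSV columns (name, moid, effectiveSladomainname) exported above and that tags named after the SLAs already exist in vCenter; treat it as a starting point rather than the original workflow.

Connect-VIServer $vCenterURL

# Tag each VM from the Rubrik export with its effective SLA domain name
Import-Csv $WorkingDir | ForEach-Object {
    $tag = Get-Tag -Name $_.effectiveSladomainname
    Get-VM -Id "VirtualMachine-$($_.moid)" | New-TagAssignment -Tag $tag
}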

I’ll have to do this exercise again later, and I’ll have to make sure VMs that have been removed from an SLA are covered too. This is just the initial code.

Improvements for later

  • Use more unique VM identification.
  • Pull back VMs that are not covered by an SLA and make sure they are updated too (a rough sketch is below).
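For that second item, one way to spot such VMs on the vSphere side is to list everything that carries none of the SLA tags. This sketch assumes the tag names match the default Bronze/Silver/Gold SLAs and is not from the original post:

# List VMs with no Bronze/Silver/Gold tag so they can be reviewed
$slaTags = "Bronze", "Silver", "Gold"
Get-VM | Where-Object {
    -not (Get-TagAssignment -Entity $_ | Where-Object { $slaTags -contains $_.Tag.Name })
} | Select-Object Name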

Batch check SSL Certificates on CentOS

Hello all,

I’ve moved to management and don’t get to do the fun stuff as much, but recently I got to script an SSL check since we didn’t have engineering resources to complete the task. Yay!

WARNING: Do not copy/paste code from websites. Sites can inject funny stuff into lines that you cannot see.

Single URL test prep work

  1. Get a Linux VM. I chose CentOS 7, but you could use almost anything.
  2. Get a list of URLs to check.

Install the Qualys SSL checker to CentOS

  1. ssh to your Linux box.
  2. Install the Go language if it isn’t already.
    • sudo yum install golang
  3. Grab the Qualys SSL Labs tester binary for Linux. OS X and Windows binaries are also available.
    • curl -O https://github.com/ssllabs/ssllabs-scan/releases/download/v1.3.0/ssllabs-scan_1.3.0-linux64.tgz
  4. Unzip the binary
    • tar -zxvf ssllabs-scan_1.3.0-linux64.tgz
  5. Make your binary executable.
    • chmod +x ssllabs-scan
  6. Test it out! For example:
    • ./ssllabs-scan yoursite.example.com

Prep work for multiple URLs.

  1. Import our list
    • touch sitelist
    • vi sitelist
    • hit a to enter insert mode, then paste in your URL list
    • hit ESC to get out of insert mode
    • type :wq
    • hit Enter
  2. Test it on our sitelist.
    • ./ssllabs-scan -json-flat=true -hostfile=sitelist > results.json
  3. Does it look okay?
    • more results.json
    • hit q to exit

Convert to CSV, in case your brain has atrophied from being in management and you can no longer read JSON.

  1. Install the EPEL repo, pip, lxml, and, most importantly, csvkit. You need EPEL before you can install pip.
    • sudo yum install epel-release
    • sudo yum install python-pip
    • sudo pip install --upgrade pip
    • sudo pip install csvkit
    • sudo pip install lxml==3.4.2
  2. Convert!
    • in2csv results.json > results.csv
  3. Does it look like a csv?
    • more results.csv
    • hit q to exit

Big thanks to https://github.com/wireservice/csvkit and Qualys https://github.com/ssllabs/ssllabs-scan

Dude, where’s my vxlan?

Strange stuff happens when you go over 1000 IGMP snooping groups on a UCS fabric interconnect. The Cisco UCS 6100 Series Fabric Interconnect supports up to 1000 IGMP snooping groups, and the Cisco UCS 6200 Series Fabric Interconnect supports up to 4000. Here’s how to check whether you’ve taken your VXLAN to the limit, since VXLAN relies on IGMP snooping groups to keep chatter on your switch to a minimum.

In ESXi:
# Use the vmk interface with VXLAN running on it. I used vmk1 in this example.
tcpdump-uw -i vmk1 igmp
On your UCS Fabric Interconnect:
show ip igmp snooping groups

To clean your data you can:
cut -d' ' -f4 (your fabric interconnect output) | sort -u > (new file for fabric interconnect)
cut -d' ' -f5 (your esxi output) | sort -u > (new file for esxi)

Next, run a diff on your two files; anything that exists on the ESXi side but not on your FI is your problem. Enjoy!