Posts Tagged ‘Powershell’

Bulk-modification of deployment deadlines in SCCM

A full two years into testing, we finally are moving forward with production deployment of our Configuration Manager 2012 (SCCM) environment. Last month we (recklessly?) migrated 1000 workstations into the environment. While the deployment was a technological success, it was a bit of a black eye in the PR department.

Client computers almost uniformly did an unplanned reboot one hour after the SCCM agent was installed on their workstations. In addition, many clients experienced multiple reboot requests over the following days. Many clients reported that they did not get the planned 90-minute impending-reboot warning, but only the 15-minute countdown timer.

Lots of changes were required to address this situation:

See the “Suppress any required computer restarts” setting documented here:

http://technet.microsoft.com/en-us/library/gg682067.aspx#BKMK_EndpointProtectionDeviceSettings

This one was causing clients to reboot following the upgrade of their existing Forefront Endpoint Protection client to SCEP. That explained the unexpected 60-minute post-install reboot.

Next, we changed the post-deadline reboot grace period from 90 minutes to 9 hours, and the final warning from 15 minutes to one hour. This should allow people to complete work tasks without having updates interrupt their work day.

Finally, we are planning to reset the deployment deadline for all existing software update deployments to a time several days out from the initial client installation time. Since we have several dozen existing software update group deployments, we need a programmatic approach to completing this task. The key to this was found here:

http://www.scconfigmgr.com/2013/12/01/modify-the-deadline-time-of-an-adr-deployment-with-powershell/

Thanks to Nickolaj Andersen for posting this valuable script.

It did take me a bit of time to decode what Nickolaj was doing with his script (I was not already familiar with the date/time format generally used in WMI). I modified the code to set existing update group deployments to a fixed date and time provided by input parameters. I also added some inline documentation to the script, along with a few more input validation checks:

# Set-CMDeploymentDeadlines script
#   J. Greg Mackinnon, 2014-02-07
#   Updates all existing software update deployments with a new enforcement deadline.
#   Requires specification of: 
#    -SiteServer (an SCCM Site Server name)
#    -SiteCode   (an SCCM Site Code)
#    -DeadlineDate
#    -DeadlineTime
#

[CmdletBinding()]

param(
    [parameter(Mandatory=$true)]
    [string] $SiteServer,

    [parameter(Mandatory=$true)]
    [string] $SiteCode,

    [parameter(Mandatory=$true)]
    [ValidateScript({
        if ($_ -match '^\d{8}$') {
            $true
        } else {
            Throw '-DeadlineDate must be a date string in the format "YYYYMMDD"'
        }
    })]
    [string] $DeadlineDate,

    [parameter(Mandatory=$true)]
    [ValidateScript({
        if ($_ -match '^\d{4}$') {
            $true
        } else {
            Throw '-DeadlineTime must be a time string in the format "HHMM", using 24-hour syntax'
        }
    })]
    [string] $DeadlineTime
)

Set-PSDebug -Strict

# WMI Date format is required here.  See:
# http://technet.microsoft.com/en-us/library/ee156576.aspx
# This is the "UTC Date-Time Format", sometimes called "dtm Format", and referenced in .NET as "dmtfDateTime"
#YYYYMMDDHHMMSS.000000+MMM
#The grouping of six zeros represents microseconds.  The final cluster of MMM is the offset from GMT, in minutes.
#Wildcards can be used for parts of the date that are not specified.  In this case, we will not specify
#the GMT offset, thus using "local time".
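# For reference, .NET can generate a string in this format from any DateTime:
#   [System.Management.ManagementDateTimeConverter]::ToDmtfDateTime((Get-Date))
# (Not used below; we build the string by hand from the input parameters.)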

# Build new deadline date in WMI Date format:
[string] $newDeadline = $DeadlineDate + $DeadlineTime + '00.000000+***'
Write-Verbose "Time to be sent to the Deployment Object: $newDeadline"
 

# Get all current Software Update Group Deployments.
# Note: We use the WMI class "SMS_UpdateGroupAssignment", documented here:
# http://msdn.microsoft.com/en-us/library/hh949604.aspx
# Shares many properties with "SMS_CIAssignmentBaseClass", documented here:
# http://msdn.microsoft.com/en-us/library/hh949014.aspx 
$ADRClientDeployment = @()
$ADRClientDeployment = Get-WmiObject -Namespace "root\sms\site_$($SiteCode)" -Class SMS_UpdateGroupAssignment -ComputerName $SiteServer

# Loop through the assignments setting the new EnforcementDeadline, 
# and commit the change with the Put() method common to all WMI Classes:
# http://msdn.microsoft.com/en-us/library/aa391455(v=vs.85).aspx
  
foreach ($Deployment in $ADRClientDeployment) {

    $DeploymentName = $Deployment.AssignmentName

    Write-Output "Deployment to be modified: `n$($DeploymentName)"
    try {
        $Deployment.EnforcementDeadline = "$newDeadline"
        $Deployment.Put() | Out-Null
        if ($?) {
            Write-Output "`nSuccessfully modified deployment`n"
        }
    }
    catch {
        Write-Output "`nERROR: $($_.Exception.Message)"
    }
}
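
A sample invocation, with a hypothetical site server name and site code:

.\Set-CMDeploymentDeadlines.ps1 -SiteServer 'sccm01' -SiteCode 'ABC' -DeadlineDate '20140310' -DeadlineTime '0800'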

We could push out the deployment time for Application updates as well, using the “SMS_ApplicationAssignment” WMI class:

http://msdn.microsoft.com/en-us/library/hh949469.aspx

In this case, we would want to change the “UpdateDeadline” property, since we do not set a “deployment deadline” for these updates, but instead are using application supersedence rules to control when the updates are deployed.
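
Here is a minimal, untested sketch of that variation, reusing the $SiteServer, $SiteCode, and $newDeadline values from the script above:

$appAssignments = Get-WmiObject -Namespace "root\sms\site_$($SiteCode)" `
    -Class SMS_ApplicationAssignment -ComputerName $SiteServer
foreach ($assignment in $appAssignments) {
    # UpdateDeadline, rather than EnforcementDeadline, is the property of interest here:
    $assignment.UpdateDeadline = $newDeadline
    $assignment.Put() | Out-Null
}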

Automated Driver Import in MDT 2013

As a follow-up to my previous post, I also have developed a script to automate the import of drivers into MDT 2013.  This PowerShell script takes a source folder structure and duplicates the top two levels of folders in the MDT Deployment Share “Out-of-Box Drivers” branch.  The script then imports all drivers found in the source directories to the matching folders in MDT.

All we have to do is extract all drivers for a given computer model into an appropriately named folder in the source directory, and then run the script.

################################################################################
#
#  Create-MDTDriverStructure.ps1
#  J. Greg Mackinnon, University of Vermont, 2013-11-05
#  Creates a folder structure in the "Out of Box Drivers" branch of a MDT 2013
#    deployment share.  The structure matches the first two subdirectories of 
#    the source filesystem defined in $srcRoot.  All drivers contained within
#    $srcRoot are imported into the deployment share.
#
#  Requires: 
#    $srcDir - A driver source directory, 
#    $MDTRoot - a MDT 2013 deployment share
#    - MDT 2013 must be installed in the path noted in $modDir!!!
#
################################################################################

# Define source driver directories:
[string] $srcRoot = 'E:\staging\drivers\import'
[string[]] $sources = gci -Attributes D $srcRoot | `
    Select-Object -Property name | % {$_.name.tostring()}
	
# Initialize MDT Working Environment:
[string] $MDTRoot = 'E:\DevRoot'
[string] $PSDriveName = 'DS100'
[string] $oobRoot = $PSDriveName + ":\Out-Of-Box Drivers"
[string] $modDir = `
	'C:\Program Files\Microsoft Deployment Toolkit\Bin\MicrosoftDeploymentToolkit.psd1'
Import-Module $modDir
New-PSDrive -Name "$PSDriveName" -PSProvider MDTProvider -Root $MDTRoot


foreach ($source in $sources){
    Write-Host "Working with source: " $source -ForegroundColor Magenta
    # Create the OOB Top-level folders:
    new-item -path $oobRoot -name $source -itemType "directory" -Verbose
    # Define a variable for the current working directory:
    $sub1 = $srcRoot + "\" + $source
    # Create an array containing the folders to be imported:
    $items = gci -Attributes D $sub1 | Select-Object -Property name | % {$_.name.tostring()}
    $oobDir = $oobRoot + "\" + $source

    foreach ($item in $items) {
		# Define source and target directories for driver import:
	    [string] $dstDir = $oobDir + "\" + $item
	    [string] $srcDir = $sub1 + "\" + $item
	
	    # Clean up "cruft" files that lead to duplicate drivers in the share:
		Write-Host "Processing $item" -ForeGroundColor Green
	    Write-Host "Cleaning extraneous files..." -ForegroundColor Cyan
        $delItems = gci -recurse -Include version.txt,release.dat,cachescrubbed.txt $srcDir
        Write-Host "Found " $delItems.count " files to delete..." -ForegroundColor Yellow
	    $delItems | remove-Item -force -confirm:$false
        $delItems = gci -recurse -Include version.txt,release.dat,cachescrubbed.txt $srcDir
        Write-Host "New count for extraneous files: " $delItems.count -ForegroundColor Yellow

	    # Create the target directory:
		Write-Host "Creating $item folder" -ForegroundColor Cyan
	    new-item -path $oobDir -name $item -itemType "directory" -Verbose
	
	    # Import all drivers from the source to the new target:
		Write-Host "Importing Drivers for $item" -ForegroundColor Cyan
	    Import-MDTDriver -Path $dstDir -SourcePath $srcDir 
		
        Write-Host "Moving to next directory..." -ForegroundColor Green
		
    } # End ForEach Item
} # End ForEach Source

Remove-PSDrive -Name "$PSDriveName"


Rethinking Driver Management in MDT 2013

We have been using the Microsoft Deployment Toolkit (MDT) in LTI/Lite Touch mode here at the University for a long time now.  Why, we used it to deploy XP back when MDT was still called the Business Desktop Deployment Solution Accelerator (BDD).  In that time, we have gone through several different driver management methods.  Gone are the nightmare days of dealing with OEMSETUP files, $OEM$ directories, and elaborate “DriverPack” injection scripts for XP (thank goodness).

With the release of Vista, we moved to a PnP free-for-all model of driver detection.  After Windows 8.0, we found we really needed to separate our drivers by operating system.  Thus, we created Win7, Win8, and WinPE driver selection profiles.

But now we are finding that driver sprawl is becoming a major issue again.  On many new systems we run through a seemingly successful deployment, but end up with a non-responsive touch screen, a buggy track pad, and (sometimes) a very unstable system.

Starting this week, we are trying a new hybrid driver management approach.  We will create a driver folder for each computer model sold through our computer depot.  I have developed a custom bit of VBScript to check whether the hardware being deployed to is a known model.  Driver injection will be restricted to this model if a match is found.  The script contains additional logic to detect support for both Windows 7 and Windows 8 variants, and to select the most current drivers detected.  Unknown models will fall back on the PnP free-for-all detection method.

Here is how it works…

  1. Create a new group in your OS deployment task sequence named “Custom Driver Inject”, or something similar.  Grouping all actions together will allow easier transfer of these custom actions to other Task Sequences:
    [Screenshot: DM-DriverInjectGroup]
  2. Under this new group, add a new action of type “Set Task Sequence Variable”.  Name the variable “TargetOS”, and set the value to the OS that you are deploying from this task sequence.  You must follow the same naming convention that you use in your Out-of-Box Drivers folder.  I use Win(X), where (X) is the major OS version of the drivers in the folder.  In this example, I have chosen “Win8”:
    [Screenshot: DM-SetTargetOS]
  3. Add an action of type “Run Command Line”.  Name this action “Supported Model Check”.  In the Command line field, enter: cscript “%SCRIPTROOT%\ZUVMCheckModel.wsf”.  (We will import this script into the deployment share later on.)
    [Screenshot: DM-RunModelCheckScript]
  4. Add a sub-group named “Supported Model Actions”.  Under the “Options” tab, add a condition of type “Task Sequence Variable”.  Use the variable “SupportedModel”, the Condition “equals”, and the Value “YES”.  (The SupportedModel variable gets set by the CheckModel script run in the previous step.):
    [Screenshot: DM-ConditionalGroup]
  5. Under this new group, add a new action of type “Set Task Sequence Variable”.  Name this task “Set Variable DriverGroup002”.  Under “Task Sequence Variable”, set “DriverGroup002”, and set the value to “Models\%TargetOS%\%Model%”.  (Note:  You could use “DriverGroup001”, but I already am using that variable to hold a default group of drivers that I want added to all systems.  The value “Models\%TargetOS%\%Model%” defines the path to the driver group in the deployment share.  If you use a different folder structure, you will need to modify this path.):
    [Screenshot: DM-SetDriverGroup]
  6. Create a new task of type “Inject Drivers”.  Name this task “Inject Model-Specific Drivers”.  For the selection profile, select “Nothing”.  Be sure to select “Install all drivers from the selection profile”.  (NOTE: The dialog implies that we will be injecting only drivers from a selection profile.  In fact, this step will inject drivers from any paths defined in any present “DriverGroupXXX” Task Sequence variables.)
    [Screenshot: DM-InjectModelDrivers]
  7. Now, under our original Custom Driver Inject group, add a new task of type “Inject Drivers”.  Choose from the selection profile “All Drivers”, or use a different fallback selection profile that suits the needs of your task sequence.  This time, select “Install only matching drivers from the selection profile”:
    [Screenshot: DM-InjectUnsupported1]
    Under the “Options” tab, add a condition where the “Task Sequence Variable” named “SupportedModel” equals “NO”:
    [Screenshot: DM-InjectUnsupported2]
    This step will handle injection of matching drivers into hardware models for which we do not have a pre-defined driver group.
  8. Optionally, you now can open the “CustomSettings.ini” file and add the following to your “Default” section:
         DriverGroup001=Peripherals
    (I have a “Peripherals” driver group configured which contains USB Ethernet drivers used in our environment.  These are a necessity when deploying to hardware that does not have an embedded Ethernet port, such as the Dell XPS 12 and XPS 13.  You also could add common peripherals with complicated drivers such as a DisplayLink docking station or a Dell touch screen monitor.)
  9. Add the “ZUVMCheckModel.wsf” script to the “Scripts” folder of your deployment share.  The code for this script is included below.  I think the script should be fairly easy to adapt for your environment.
  10. Finally, structure your “Out-of-Box Drivers” folder to contain a “Models” folder, with a subfolder for each OS (e.g. “Win8”), each containing a folder per matching hardware model in your environment.  I get most of our driver collections from Dell:
    http://en.community.dell.com/techcenter/enterprise-client/w/wiki/2065.dell-driver-cab-files-for-enterprise-client-os-deployment.aspx
    (NOTE:  Thanks Dell!)
    The real challenge of maintaining this tree is in getting the model names right.  Use “wmic computersystem get model” to discover the model string for any new systems in your environment; a PowerShell equivalent is shown just after this list.  A table of a few current models I have been working with is included below.
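
For reference, the same model string can be retrieved with PowerShell:

(Get-WmiObject -Class Win32_ComputerSystem).Model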

Dell Marketing Model Name to WMI Name Translator Table:

  • Dell XPS 12 (first generation) – “XPS 12 9Q23”
  • Dell XPS 12 (second generation) – “XPS 12-9Q33”
  • Dell XPS 13 (first generation) – “Dell System XPS L321X”
  • Dell XPS 13 (second generation) – “Dell System XPS L322X”
  • Dell XPS 14 – “XPS L421Q”
  • Dell Latitude 10 – “Latitude 10 – ST2”
  • VMware Virtual Machine – “VMware Virtual Platform”
  • Microsoft Hyper-V Virtual Machine – “Virtual Machine”

A fun nuance we encountered last week was a Latitude E5430 model that contained “no-vPro” after the model number. Dell does not provide separate driver CABs for vPro/non-vPro models, so I added a regular expression test for Latitudes that strips any cruft after the model number. That’s one more problem down…
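
The script below does that cleanup in VBScript by truncating the model name at its second space; a rough PowerShell equivalent of the idea, using a hypothetical model string, would be:

$model = 'Latitude E5430 no-vPro'        # hypothetical detected model string
if ($model -match '^(Latitude\s+\S+)') {
    $model = $Matches[1]                 # now just 'Latitude E5430'
}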

The following site contains a list of older model name translations:
http://www.faqshop.com/wp/misc/wmi/list-of-wmic-csproduct-get-name-results
As you can see, most Latitudes and Optiplexes follow sane and predictable model name conventions. I wish the same were true for the XPS.

Finally, I am indebted to the following sources for their generously detailed posts on driver management. Without their help, I doubt I would have been able to make this solution fly:

Jeff Hughes of the Windows Enterprise Support Server Core Team:
http://blogs.technet.com/b/askcore/archive/2013/05/09/how-to-manage-out-of-box-drivers-with-the-use-of-model-specific-driver-groups-in-microsoft-deployment-toolkit-2012-update-1.aspx

Andrew Barnes (aka Scriptimus Prime), whose posts on MDT driver management cover the basics of DriverGroups and model selection:
http://scriptimus.wordpress.com/2013/02/25/ltizti-deployments-injecting-drivers-during-deployment/
AND on automating driver import into MDT (written for MDT 2012… some changes required for 2013):
http://scriptimus.wordpress.com/2012/06/08/mdt-2012-creating-a-driverstore-folder-structure/

The incredible Michael Niehaus, who in this post discusses the use of DriverGroups and Selection Profiles:
http://blogs.technet.com/b/mniehaus/archive/2009/09/09/mdt-2010-new-feature-19-improved-driver-management.aspx

And finally Eric Schloss of the “Admin Nexus”, who gave me the idea of developing a fallback for systems that do not match a known model. It was this key bit of smarts that gave me the confidence to move forward with a model-specific driver grouping strategy:
http://adminnexus.blogspot.com/2012/08/yet-another-approach-to-driver.html

ZUVMCheckModel.wsf script:

<job id="ZUVMCheckModel">
<script language="VBScript" src="ZTIUtility.vbs"/>
<script language="VBScript">

Option Explicit
RunNewInstance

'//--------------------------------------------------------
'// Main Class
'//--------------------------------------------------------
Class ZUVMCheckModel
	
	'//--------------------------------------------------------
	'//  Constructor to initialize needed global objects
	'//--------------------------------------------------------
	Private Sub Class_Initialize
	End Sub
	
	'//--------------------------------------------------------
	'// Main routine
	'//--------------------------------------------------------

	Function Main()
	' //*******************************************************
	' //
	' // File: ZUVMCheckModel.wsf
	' //
	' // Purpose: Checks the model of this system against
	' //          a list of known machine models.  Returns
	' //          TRUE if a matching model is detected.
	' //
	' // Usage: cscript ZUVMCheckModel.wsf /Model: [/debug:true]
	' //
	' //*******************************************************
	
	'Use the following lines for debugging only.
	'oEnvironment.Item("TargetOS") = "Win7"
	'oEnvironment.item("DeployRoot") = "c:\local\mdt"
	'oEnvironment.Item("Model") = "Latitude E6500 some annoying variation"
	'End debug Params

	  Dim aModels()          'Array of models taken from DriverGroups.xml
	  Dim bOldDrivers        'Boolean indicating drivers present for an older OS version
	  Dim i                  'Generic integer for looping
	  Dim j                  'Generic integer for looping
	  Dim iRetVal            'Return code variable
	  Dim iMaxOS             'Integer representing the highest matching OS driver store
	  Dim oRegEx
	  Dim oMatch
	  Dim match
	  Dim oXMLDoc            'XML Document Object, for reading DriverGroups.xml
	  Dim Root,NodeList,Elem 'Objects in support of oXMLdoc
	  Dim sDGPath            'Path to DriverGroups.xml file
	  Dim sInitModel         'String representing the initial value of
	                         '   oEnvironment.Item("Model")
	  Dim sItem	             'Item in aModels array.
	  Dim sMaxOS             'OS Name of highest matching OS driver store
	  Dim sOSFound           'OS Name for a given matching driver set.
	  
	  oLogging.CreateEntry "Begin ZUVMCheckModel...", LogTypeInfo
	  
	  'Set the default values:
	  oEnvironment.Item("SupportedModel") = "NO"
	  iMaxOS = CInt(Right(oEnvironment.Item("TargetOS"),1))
	  'wscript.echo "Default value for iMaxOS = " & iMaxOS
	  bOldDrivers = false
	  sInitModel = oEnvironment.Item("Model")
	  'wscript.echo "sInitModel value = " & sInitModel
	  
	  Set oRegEx = New RegExp
	  oRegEx.Global = True
	  oRegEx.IgnoreCase = True
	  
	  'Modify the detected model name to handle known variations:
	  oRegEx.pattern = "Latitude"
	  if oRegEx.test(sInitModel) then
		oLogging.CreateEntry "Model is a Latitude.  Cleaning up the model name...", LogTypeInfo
		oRegEx.pattern = " "
		set oMatch = oRegEx.Execute(sInitModel)
		'wscript.echo "oMatch Count is: " & oMatch.count
		if oMatch.Count > 1 then
			i = oMatch.item(1).FirstIndex
			oEnvironment.Item("Model") = Left(sInitModel,i)
			'wscript.echo """"&oEnvironment.Item("Model")&""""
		end if
	  end if

	  'Check for DriverGroups.xml file, which will contain the supported model list:
	  iRetVal = Failure
	  iRetVal = oUtility.FindFile("DriverGroups.xml", sDGPath)
	  if iRetVal <> Success then
		oLogging.CreateEntry "DriverGroups file not found. ", LogTypeError
		exit function
	  end if 
	  oLogging.CreateEntry "Path to DriverGroups.xml: " & sDGPath, LogTypeInfo
	  
	  'Parse the DriverGroups.xml file:
	  oLogging.CreateEntry "Parsing DriverGroups.xml...", LogTypeInfo
	  Set oXMLDoc = CreateObject("Msxml2.DOMDocument") 
	  oXMLDoc.setProperty "SelectionLanguage", "XPath"
	  oXMLDoc.load(sDGPath)
	  Set Root = oXMLDoc.documentElement 
	  Set NodeList = Root.getElementsByTagName("Name")
	  oLogging.CreateEntry "NodeList Member Count is: " & NodeList.length, LogTypeInfo
	  'oLogging.CreateEntry "NodeList.Length variant type is: " & TypeName(NodeList.Length), LogTypeInfo
	  i = CInt(NodeList.length) - 1
	  ReDim aModels(i) 'Resize aModels to hold all matching DriverGroup items.
	  'oLogging.CreateEntry "List of Available Driver Groups:", LogTypeInfo
	  i = 0
	  For Each Elem In NodeList
		if InStr(Elem.Text,"Models\") then
			aModels(i) = Mid(Elem.Text,8)	'Add text after "Models\"
			'oLogging.CreateEntry aModels(i), LogTypeInfo
			i = i + 1
		end if
	  Next
	  oLogging.CreateEntry "End Parsing DriverGroups.xml.", LogTypeInfo

	  'Loop through the list of supported models to find a match:
	  oLogging.CreateEntry "Checking discovered driver groups for match to: " & oenvironment.Item("Model"), LogTypeInfo
	  For Each sItem in aModels
		oLogging.CreateEntry "Checking Driver Group: " & sItem, LogTypeInfo
		i = InStr(1, sItem, oEnvironment.Item("Model"), vbTextCompare)

		'wscript.echo TypeName(i) 'i is a "Long" number type.
		If i <> 0 Then
			oLogging.CreateEntry "Matching Model found.", LogTypeInfo
			
			j = InStr(sItem,"\")
			sOSFound = Left(sItem,j-1)
			'wscript.echo "sOSFound = " & sOSFound 
			if (InStr(1,sOSFound,oEnvironment.Item("TargetOS"),vbTextCompare) <> 0) then
				oLogging.CreateEntry "Drivers matching the requested OS are available.  Exiting with success.", LogTypeInfo
				oEnvironment.Item("SupportedModel") = "YES"
				iRetVal = Success
				Main = iRetVal
				Exit Function
			end if
			if iMaxOS > CInt(Right(sOSFound,1)) then
				iMaxOS = CInt(Right(sOSFound,1))
				'wscript.echo "iMaxOS = " & iMaxOS
				sMaxOS = sOSFound
				bOldDrivers = true
				'wscript.echo "sMaxOS = " & sMaxOS
			end if
		End If
	  Next
		
	  If bOldDrivers Then 'Run if sMaxOS is defined... set a boolean when this is defined and test against that?
		oLogging.CreateEntry "Model drivers were found for an OS older than the one selected...", LogTypeWarning
		oEnvironment.Item("SupportedModel") = "YES"
		oEnvironment.Item("TargetOS") = sMaxOS
	  Else
	    oLogging.CreateEntry "No matching drivers were found for this model.", LogTypeInfo
	  End If
	  
	  oLogging.CreateEntry "End ZUVMCheckModel.", LogTypeInfo

	  iRetVal = Success
	  Main = iRetVal

	End Function

End Class

</script>
</job>

Improving Notifications in System Center Operations Manager 2012

Anyone who depends on System Center Operations Manager 2012 (or any earlier version of SCOM, back to MOM) likely has noticed that notifications are a bit of a weak spot in the product.

To address this, we have used the “command channel” to improve the quality of messages coming out of SCOM.  Building on the backs of giants, we implemented a script that takes an AlertID from SCOM and generates nicely formatted email and alphanumeric pager messages with relevant alert details.

More recently, we identified the need to generate follow-up notifications when an initial alert does not get addressed.  I went back to our original script and updated it to use a new, custom Alert ResolutionState (“Notified”), and I added logic to update the Alert CustomField1 and CustomField2 with data that is useful in determining whether or not an alert should get a new notification, and how many times follow-up notifications have been sent.

Heart-felt appreciation goes out to Tao Yang for his awesome work on his “SCOMEnhancedEmailNotification.ps1” script, which served as the core for my work here.

Here is my version… I don’t have a lot of time to explain it, but hopefully the comments give you enough to go on. One warning: WordPress stripped the HTML tag patterns out of several “-replace” calls in the fnMamlToHTML and fnTrimHTML functions below, so you will need to restore those patterns before using this code.

#=====================================================================================================
# AUTHOR:	J. Greg Mackinnon, Adapted from 1.1 release by Tao Yang 
# DATE:		2013-05-21
# Name:		SCOMEnhancedEmailNotification.PS1
# Version:	3.0
# COMMENT:	SCOM Enhanced Email notification which includes detailed alert information
# Update:	2.0 - 2012-06-30	- Major revision for compatibility with SCOM 2012
#								- Cmdlets updated to use 2012 names
#								- "Notified" Resolution Status logic removed
#								- Snapin Loading and PSDrive Mappings removed (replaced with Module load)
#								- HTML Email reformatted for readability
#								- Added '-format' parameter to allow for alphanumeric pager support
#								- Added '-diag' boolean parameter to create optional AlertID-based diagnostic logs
# Update:   2.2 - 2013-05-16    - Added logic to update "CustomField1" alert data to reflect that notification has been sent for new alerts.
#								- Added logic to update "CustomField2" alert data to reflect the repeat count for new alert notification sends.
#								- Added support for specifying alerts with resolution state "acknowledged"
#                               - Did some minor adjustments to improve execution time and reduce memory overhead.
# Update:	3.0 - 2013-05-20	- Updated to reduce the volume of PowerShell instances spawned by SCOM.  Added "mailTo" and "pageTo" parameters to allow sending of both short
#                                         and long messages from a single script instance.
#								- Converted portions of script to subroutine-like functions to allow repetition (buildHeaders, buildPage, buildMail)
#								- Restored "Notified" resolution state logic.
#								- Renamed several variables for my own sanity.
#								- Added article lookup updates from Tao Yang 2.0 script.
# Usage:	.\SCOMEnhancedEmailNotification.ps1 -alertID xxxxx -mailTo @('John Doe;jdoe@mail.com','Richard Roe;rroe@provider.net') -pageTo @('Team Pager;teampage@page.provider.com')
#=====================================================================================================
#In OpsMgr 2012, the AlertID parameter passed in is '$Data/Context/DataItem/AlertId$' (single quote)
#Quotation marks are required; otherwise the AlertID parameter will not be treated as a string.
param(
	[string]$alertID = $(throw 'A valid, quote-delimited, SCOM AlertID must be provided for -AlertID.'),
	[string[]]$mailto,
	[string[]]$pageto,
	[switch]$diag
)
Set-PSDebug -Strict

#### Setup Error Handling: ####
$error.clear()
#$erroractionpreference = "SilentlyContinue"
$erroractionpreference = "Inquire"

#### Setup local option variables: ####
## Logging: 
#Remove '$alertID' from the following two log file names to prevent the drive from filling up with diag logs:
$errorLogFile = 'C:\local\logs\SCOMNotifyErr-' + $alertID + '.log'
$diagLogFile = 'C:\local\logs\SCOMNotifyDiag-' + $alertID + '.log'
#$errorLogFile = 'C:\local\logs\SCOMNotifyErr.log'
#$diagLogFile = 'C:\local\logs\SCOMNotifyDiag.log'
## Mail: 
$SMTPHost = "smtp.uvm.edu"
$SMTPPort = 25
$Sender = New-Object System.Net.Mail.MailAddress("OpsMgr@lifeboat.campus.ad.uvm.edu", "Lifeboat OpsMgr Notification")
#If an error occurs while executing the script, this is the recipient for the error notification email.
$ErrRecipient = New-Object System.Net.Mail.MailAddress("saa-ad@uvm.edu", "SAA Windows Administration Team")
##Set Culture Info (for knowledgebase article language selection):
$cultureInfo = [System.Globalization.CultureInfo]'en-US'
##Get the name of the local computer (where the script is run)...
$RMS = $env:computername

#### Initialize Global Variables and Objects: ####
## Mail Message Object:
[string] $threadID = ''
$SMTPClient = New-Object System.Net.Mail.smtpClient
$SMTPClient.host = $SMTPHost
$SMTPClient.port = $SMTPPort
##Load SCOM PS Module
if ((get-module | ? {$_.name -eq 'OperationsManager'}) -eq $null) {
	Import-Module OperationsManager -ErrorAction SilentlyContinue -ErrorVariable Err | Out-Null
}
## Management Group Object:
$mg = get-SCOMManagementGroup
##Get Web Console URL
$WebConsoleBaseURL = (get-scomwebaddresssetting | Select-Object -Property WebConsoleUrl).webconsoleurl
#### End Initialize ####


#### Begin Parse Input Parameters: ####
##Get recipients names and email addresses from "-to" array parameter: ##
if ((!$mailTo) -and (!$pageTo)) {
	write-host "An array of name/email address pairs must be provided in either the -mailTo or -pageTo parameter, in the format `@(`'me;my@mail.com`',`'you;you@mail.net`')"
	exit
}
$mailRecips = @()
Foreach ($item in $mailTo) {
	$to = New-Object psobject
	$name = ($item.split(";"))[0]
	$email = ($item.split(";"))[1]
	Add-Member -InputObject $to -MemberType NoteProperty -Name Name -Value $name
	Add-Member -InputObject $to -MemberType NoteProperty -Name Email -Value $email
	$mailRecips += $to
	Remove-Variable to
	Remove-Variable name
	Remove-Variable email
}
$pageRecips = @()
Foreach ($item in $pageTo) {
	$to = New-Object psobject
	$name = ($item.split(";"))[0]
	$email = ($item.split(";"))[1]
	Add-Member -InputObject $to -MemberType NoteProperty -Name Name -Value $name
	Add-Member -InputObject $to -MemberType NoteProperty -Name Email -Value $email
	$pageRecips += $to
	Remove-Variable to
	Remove-Variable name
	Remove-Variable email
}
if ($diag -eq $true) {
	[string] $("mailRecipients:") | Out-File $diagLogFile -Append 
	$mailRecips | Out-File $diagLogFile -Append
	[string] $("pageRecipients:") | Out-File $diagLogFile -Append 
	$pageRecips | Out-File $diagLogFile -Append
}
## Parse "-AlertID" input parameter: ##
$alertID = $alertID.toString()
#remove "{" and "}" around the $alertID if exist
if ($alertID.substring(0,1) -match "{") {
	$alertID = $alertID.substring(1, ( $alertID.length -1 ))
}
if ($alertID.substring(($alertID.length -1), 1) -match "}") {
	$alertID = $alertID.substring(0, ( $alertID.length -1 ))
}
#### End Parse input parameters ####


#### Function Library: ####
function getResStateName($resStateNumber){
	[string] $resStateName = $(get-ScomAlertResolutionState -resolutionStateCode $resStateNumber).name
	$resStateName
}
function setResStateColor($resStateNumber) {
	switch($resStateNumber){
		"0" { $sevColor = "FF0000" }	#Color is Red
		"1" { $sevColor = "FF0000" }	#Color is Red
		"255" { $sevColor = "3300CC" }	#Color is Blue
		default { $sevColor = "FFFF00" }	#Color is Yellow
	}
	$sevColor
}
function stripCruft($cruft) {
	#Removes "cruft" data from messages. 
	#Intended to make subject lines and alphanumeric pages easier to read
	$cruft = $cruft.replace("®","")
	$cruft = $cruft.replace("(R)","")
	$cruft = $cruft.replace("Microsoftr ","")
	$cruft = $cruft.replace("Microsoft ","")
	$cruft = $cruft.replace("Microsoft.","")
	$cruft = $cruft.replace("Windows ","")
	$cruft = $cruft.replace(" without Hyper-V","")
	$cruft = $cruft.replace("Serverr","Server")
	$cruft = $cruft.replace(" Standard","")
	$cruft = $cruft.replace(" Enterprise","")
	$cruft = $cruft.replace(" Edition","")
	$cruft = $cruft.replace(".campus","")
	$cruft = $cruft.replace(".CAMPUS","")	
	$cruft = $cruft.replace(".ad.uvm.edu","")
	$cruft = $cruft.replace(".AD.UVM.EDU","")
	$cruft = $cruft.trim()
	return $cruft
}
function fnMamlToHTML($MAMLText){
	$HTMLText = "";
	$HTMLText = $MAMLText -replace ('xmlns:maml="http://schemas.microsoft.com/maml/2004/10"');
	$HTMLText = $HTMLText -replace ("maml:para", "p");
	$HTMLText = $HTMLText -replace ("maml:");
	$HTMLText = $HTMLText -replace ("</section>");
	$HTMLText = $HTMLText -replace ("<section>");
	$HTMLText = $HTMLText -replace ("<section>");
	$HTMLText = $HTMLText -replace ("<title>", "<h3>");
	$HTMLText = $HTMLText -replace ("</title>", "</h3>");
	# NOTE: The list-item tag patterns were stripped from the next two -replace
	# calls when this post was published; restore them before using this function.
	$HTMLText = $HTMLText -replace ("", "<li>");
	$HTMLText = $HTMLText -replace ("", "</li>");
	$HTMLText;
}
function fnTrimHTML($HTMLText){
	$TrimedText = "";
	# NOTE: The tag patterns were stripped from several of the -replace calls
	# below when this post was published; restore them before using this function.
	$TrimedText = $HTMLText -replace ("&lt;", "")
	$TrimedText = $TrimedText -replace ("")
	$TrimedText = $TrimedText -replace ("")
	$TrimedText = $TrimedText -replace ("")
	$TrimedText = $TrimedText -replace ("")
	$TrimedText = $TrimedText -replace ("")
	$TrimedText = $TrimedText -replace ("")
	$TrimedText = $TrimedText -replace ("")
	$TrimedText = $TrimedText -replace ("")
	$TrimedText = $TrimedText -replace ("<h1>", "<h3>")
	$TrimedText = $TrimedText -replace ("</h1>", "</h3>")
	$TrimedText = $TrimedText -replace ("<h2>", "<h3>")
	$TrimedText = $TrimedText -replace ("</h2>", "</h3>")
	$TrimedText = $TrimedText -replace ("<H1>", "<h3>")
	$TrimedText = $TrimedText -replace ("</H1>", "</h3>")
	$TrimedText = $TrimedText -replace ("<H2>", "<h3>")
	$TrimedText = $TrimedText -replace ("</H2>", "</h3>")
	$TrimedText;
}
function buildEmail {
	## Format the message for full-HTML email
	[string] $escTxt = ""
	if ($resState -eq '1') {$escTxt = '- Repeat Count ' + $escLev.ToString()}
	[string] $script:mailSubj = "SCOM - $resStateName $escTxt - $alertSev | $moPath | $alertName"
	$mailSubj = stripCruft($mailSubj)
	[string] $script:mailErrSubj = "Error emailing SCOM Notification for Alert ID $alertID"
	[string] $webConsoleURL = $WebConsoleBaseURL+"?DisplayMode=Pivot&AlertID=%7b$alertID%7d"
	[string] $psCmd = "Get-SCOMAlert -Id `"$alertID`" | format-list *"
	# Format the Mail Message Body (do not indent this block!)
	$script:MailMessage.isBodyHtml = $true
	$script:mailBody = @"
<p><b>Alert Resolution State:<Font color='$sevColor'> $resStateName </Font></b><br />
<b>Alert Severity:<Font color='$sevColor'> $alertSev</Font></b><br />
<b>Object Source (Display Name):</b> $moSource <br />
<b>Object Path:</b> $moPath <br />
</p>
<p>
<p><b>Alert Name:</b> $alertName <br />
<b>Alert Description:</b> <br />
$alertDesc <br>
"@
	if (($resState -eq 0) -or ($resState -eq 1)) {
		if ($isMonitorAlert -eq $true) {
$script:mailBody = $mailBody + @"
<b>Alert Monitor Name:</b> $MonitorName <br />
<b>Alert Monitor Description:</b> $MonitorDescription
</p>
"@
		}elseif ($isMonitorAlert -eq $false) {
			$script:mailBody = $mailBody + @"
<b>Alert Rule Name:</b> $RuleName <br />
<b>Alert Rule Description:</b> $RuleDescription <br />
"@
		}
	}
$script:mailBody = $mailBody + @"
<b>Alert Context Properties:</b><br />
$alertCX <br />
<b>Time Raised:</b> $timeRaised <br />
<b>Alert ID:</b> $alertID <br />
<b>Notification Status:</b> $($alert.CustomField1) </br>
<b>Notification Repeat Count:</b> $($escLev.ToString()) </p>
<p>
<b>PowerShell Alert Retrieval:</b> $psCmd <br />
<b>Web Console Link:</b> <a href="$webConsoleURL">$webConsoleURL</a> </p>
"@
	if (($resState -eq 0) -or ($resState -eq 1)) {
		foreach ($article in $arrArticles) {
		$articleContent = $article.content
$script:mailBody = $mailBody + @"
<p>
<b>Knowledge Article / Company Knowledge - $($article.Language):</b>
<hr>
<p> $articleContent
<hr>
<p>
"@
		}
	}
$script:mailErrBody = @"
<p>Error occurred when executing the script located at $RMS for alert ID $alertID.
<p>
<p>Alert Resolution State: $resStateName
<p>
<p>$error
<p>
<p><b>**Use the command below to view the full details of this alert in the SCOM PowerShell console:</b>
<p>$psCmd
<p>
<p> SCOM link: <a href="$webConsoleURL"> $webConsoleURL </a>
"@
}
function buildPage {
	## Format the message for primitive alpha-numeric pager
	$script:moPath = stripCruft($moPath)
	[string] $escTxt = ''
	if ($resState -eq '1') {$escTxt = '- Rep Count ' + $escLev.ToString()}
	[string] $script:mailSubj = "SCOM - $resStateName $escTxt | $moPath"
	[string] $script:mailErrSubj = "Error emailing SCOM Notification for Alert ID $alertID"
	#UTF-8 makes the message body look like trash.  Use ASCII (the default) instead.
	#$mailMessage.BodyEncoding =  [System.Text.Encoding]::UTF8
	$script:MailMessage.isBodyHtml = $false
	$script:moSource = stripCruft($moSource)
	$script:alertName = stripCruft($alertName)
	$script:mailBody = "| $moSource | $alertName | $timeRaised"
	$script:mailBody = stripCruft($mailBody)
}
function buildHeaders {
	param(
		[array]$recips
	)
	## Complete the MailMessage object:
	$script:MailMessage.Sender = $Sender
	$script:MailMessage.From = $Sender
	$script:MailMessage.Headers.Add('references',$threadID)
	# Regular (non-error) format
	if ($error.count -eq "0") {
		$script:MailMessage.Subject = $mailSubj
		Foreach ($item in $recips) {
			$to = New-Object System.Net.Mail.MailAddress($item.email, $item.name)
			$script:MailMessage.To.add($to)
			Remove-Variable to
		}
		$script:MailMessage.Body = $mailBody
	} 
	# Error format:
	else {									
		$script:MailMessage.Subject = $mailErrSubj
		$script:MailMessage.To.add($ErrRecipient)
		$script:MailMessage.Body = $mailErrBody
	}
	## Log the message if in diag mode:
	if ($diag -eq $true) {
		[string] $('Mail Message Object Content:') | Out-File $diagLogFile -Append
		$mailMessage | fl * | Out-File $diagLogFile -Append
	}
}
#### End Function Library ####


#### Clean up existing logs: ####
if (Test-Path $errorLogFile) {Remove-Item $errorLogFile -Force}
if (Test-Path $diagLogFile) {Remove-Item $diagLogFile -Force}
if ($diag -eq $true) {
	[string] $("AlertID : `t" + $alertID) | Out-File $diagLogFile -Append
	[string] $("MailTo      : `t" + $mailto) | Out-File $diagLogFile -Append
	[string] $("PageTo      : `t" + $pageto) | Out-File $diagLogFile -Append
	#[string] $("Format  : `t" + $format) | Out-File $diagLogFile -Append
}



#### Begin Alert Handling: ####
## Locate the specific alert:
$alert = Get-SCOMAlert -Id $alertID
if ($diag -eq $true) {
	[string] $('SCOM Alert Object Content:') | Out-File $diagLogFile -Append
	$alert | fl | Out-File $diagLogFile -Append
}
## Read Alert Information:
[string] $alertName = $alert.Name
[string] $alertDesc = $alert.Description
#[string] $alertPN = $alert.principalName
[string] $moSource = $alert.monitoringObjectDisplayName 	# Display name is "Path" in OpsMgr Console.
[string] $moId = $alert.monitoringObjectID.tostring()
#[string] $moName = $alert.MonitoringObjectName 			# Formerly "strAgentName"
[string] $moPath = $alert.MonitoringObjectPath 				# Formerly "pathName"
#[string] $moFullName = $alert.MonitoringObjectFullName 	# Formerly "alertFullName"
[string] $ruleID = $alert.MonitoringRuleId.Tostring()
[string] $resState = ($alert.resolutionstate).ToString()
[string] $resStateName = getResStateName $resState
[string] $alertSev = $alert.Severity.ToString() 			# Formerly "severity"
if ($alertSev.ToLower() -match "error") {
	$alertSev = "Critical" 									# Rename Severity to "Critical"
}
[string] $sevColor = setResStateColor $resState				# Assign color to alert severity
#$problemID = $alert.ProblemId
# ([xml] cast assumed: the alert Context property holds an XML string.)
$alertCx = $([xml]($alert.Context)).DataItem.Property `
	| Select-Object -Property Name,'#text' `
	| ConvertTo-Html -Fragment								# Alert Context property data, in HTML
$localTimeRaised = ($alert.timeraised).tolocaltime()
[string] $timeRaised = get-date $localTimeRaised -Format "MMM d, h:mm tt"
[bool] $isMonitorAlert = $alert.IsMonitorAlert
$escLev = 1
if ($alert.CustomField2) {
	[int] $escLev = $alert.CustomField2
}
## Lookup available Knowledge articles, if new alert:
if (($resState -eq 0) -or ($resState -eq 1)) {
	$articles = $mg.Knowledge.GetKnowledgeArticles($ruleId)
	
	if (!$error) {	#no point retrieving the monitoring rule when there's error processing the alert
		#if failed to get knowledge article, remove the error from $error because not every rule and monitor will have knowledge articles.
		if ($isMonitorAlert -eq $false) {
			$rule = Get-SCOMRule -Id $ruleID		
			$ruleName = $rule.DisplayName
			$ruleDescription = $rule.Description
			if ($RuleDescription.Length -lt 1) {$RuleDescription = "None"}
		} elseif ($isMonitorAlert) {
			$monitor = Get-SCOMMonitor -Id $ruleID
			$monitorName = $monitor.DisplayName
			$monitorDescription = $monitor.Description
			if ($monitorDescription.Length -lt 1) {$monitorDescription = "None"}
		}
		#Convert Knowledge articles
		$arrArticles = @()
		Foreach ($article in $articles) {
			If ($article.Visible) {
				$LanguageCode = $article.LanguageCode
				#Retrieve and format article content
				$MamlText = $null
				$HtmlText = $null
				if ($article.MamlContent -ne $null) {
					$MamlText = $article.MamlContent
					$articleContent = fnMamlToHtml($MamlText)
				}
					
				if ($article.HtmlContent -ne $null) {
					$HtmlText = $article.HtmlContent
					$articleContent = fnTrimHTML($HtmlText)
				}
				$objArticle = New-Object psobject
				Add-Member -InputObject $objArticle -MemberType NoteProperty -Name Content -Value $articleContent
				Add-Member -InputObject $objArticle -MemberType NoteProperty -Name Language -Value $LanguageCode
				$arrArticles += $objArticle
				Remove-Variable LanguageCode, articleContent
			}
		}	
	}
	if ($Articles -eq $null) {
		$articleContent = "No resolutions were found for this alert."
	}
}
## End Knowledge Article Lookup
#### End Alert Handling ####



#### Begin Mail Processes:
if ($mailto) {
	# For all alerts, send full HTML email:
	$MailMessage = New-Object System.Net.Mail.MailMessage
	buildEmail
	buildHeaders -recips $mailRecips
	invoke-command -ScriptBlock {$SMTPClient.Send($MailMessage)} -errorVariable smtpRet
}
if ($pageTo) {
	# For page-worthy alerts, format short message and send:
	$MailMessage = New-Object System.Net.Mail.MailMessage
	buildPage
	buildHeaders -recips $pageRecips
	invoke-command -ScriptBlock {$SMTPClient.Send($MailMessage)} -errorVariable smtpRet
}
#### End Mail Message Formatting #### 


# Populate CustomField1 and 2 to indicate that a notification has been sent, with repeat count.
if (!$smtpRet) { 							# IF the message was sent (apparently)...
	[string] $updateReason = "Updated by Email notification script."
	[string] $custVal1 = "notified"
	if ($resState -eq "0") { 				# . AND IF this is a "new" alert...
		$alert.ResolutionState = 1			# ..Set the resolution state to &quot;Notified&quot;
		$alert.CustomField2 = $escLev		# ..Set CustomField2 to the current notification retry count (presumably 1)
		if (!$alert.CustomField1) {			# ..AND if CustomField1 is not already defined...
			$alert.CustomField1 = $custVal1	# ... Set CustomField1.
		}
		$alert.Update($updateReason)
	} 
	elseif ($resState -eq "1") {		# .Or, IF this is a "notified" alert
		if ($alert.CustomField2) {		# ..and the notification retry count exists..
			$escLev += 1				# ...Increment by one.
		}
		$alert.CustomField2 = $escLev
		$alert.Update($updateReason)
	}
}



Write-Host $error
##Make sure the script is closed
if ($error.count -ne "0") {
	[string]$('AlertID string: ' + $alertID) | Out-File $errorLogFile
	[string]$('Alert Object Content: ') | Out-File $errorLogFile
	$alert | Format-List * | Out-File $errorLogFile
	[string]$('Error Object contents:') | Out-File $errorLogFile
	$Error | Out-File $errorLogFile
}
#Remove-Variable alert
#Remove-Module OperationsManager

Coping with Renamed User Accounts in SharePoint

Yesterday I received a strange error report from a person trying to create a new SharePoint site collection.  Our front-line guy went to investigate and found that she was getting a “User cannot be found” error out of SharePoint when attempting to complete the self-service site creation process.  This person reported that her last name had changed recently, along with her user ID, yet SharePoint was still showing her as logged in under her old name.

Linking the “Correlation ID” up to the diagnostic logs was of no great help.  The diagnostic logs simply reported “User cannot be found” when executing the method “Microsoft.SharePoint.SPSite.SelfServiceCreateSite”.  We were able to see that “ownerLogin”, “ownerEmail”, and “ownerName” strings were being passed to this function, but not what the values of those strings were.  I guessed that the web form was passing the person’s old account login name to the function, and that since this data was no longer valid, an error was getting displayed.  But how to fix this?

SharePoint 2010 (and WSS 3.0 before it) keeps a list of Site Users that can be accessed using the SharePoint Web “SiteUsers” property. This list is updated every time a new user logs in to the site.  The list entries contain username, login identity, email address, and security ID (SID) data.  It also appears that Site User data is not updated when user data changes in Active Directory (as long as the SID stays the same, that is).  Additional user account data is stored in XML data in the SharePoint databases, and can be accessed using the SharePoint Web “SiteUserInfoList” property.  All of this data needs to be purged from the root web site so that our hapless user can once again pass valid data to the SelfServiceCreateSite method.

Presumably the Site Management tools could be forced to get the job done, but the default views under SharePoint 2010 are hiding all site users from me, even when I log in as a site administrator.  Let’s try PowerShell instead:

add-pssnapin microsoft.sharepoint.powershell 
$root = get-spweb -identity "https://sharepoint.uvm.edu/" 

# "Old ID" below should be all or part of the user's original login name: 
$oldAcc = $root.SiteUsers | ? {$_.userLogin -match "oldID"} 
#Let's see if we found something: 
$oldAcc.LoginName 

#Remove the user from the web's SiteUsers list: 
$root.SiteUsers.Remove($oldAcc.LoginName) 
$root.Update() 
#Let's see if it worked: 
$id = $oldAcc.ID 
$root = get-spweb -identity "https://sharepoint.uvm.edu/" 
$root.SiteUsers.GetByID($id) 
# (This should return a "User cannot be found" error.) 

#Now to see what is in SiteUserInfoList: 
$root.SiteUserInfoList.GetItemById($id) 
# (This data can be cleaned up in the browser by visiting:
# " /_catalogs/users/simple.aspx" 
# from your site collection page.)

Moving User Profiles with PowerShell

Something that comes up with some frequency on Terminal Servers (or “Remote Desktop Servers”), and perhaps sometimes in VDI, is “How do I move a user profile from one drive to another?”  The traditional answers include the use of the user profile management GUI, or some expensive piece of software. But what if you need to automate the job? Or what if you don’t have any money for the project?

Answer? PowerShell, of course… and robocopy.

Below is a code snippet that will re-point existing user profiles from “C:\Users” to “E:\Users”:

#Collect profile reg keys for regular users ("S-1-5-21" excludes local admin, network service, and system)
$profiles = gci -LiteralPath "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" `
	| ? {$_.name -match "S-1-5-21-"} 

foreach ($profile in $profiles) {
	#Set the registry path in a format that can be used by the annoyingly demanding "get-itemproperty" cmdlet:
	$regPath = $(
		$($profile.pspath.tostring().split("::") | Select-Object -Last 1).Replace("HKEY_LOCAL_MACHINE","HKLM:")
	)
	
	#Get the current filesystem path for the user profile, using get-ItemProperty"
	$oldPath = $(
		Get-ItemProperty -LiteralPath $regPath -name ProfileImagePath
	).ProfileImagePath.tostring()
	
	#Set a variable for the new profile filesystem path:
	$newPath = $oldPath.Replace("C:\","E:\")
	
	#Set the new profile path using "set-itemproperty"
	Set-ItemProperty -LiteralPath $regPath -Name ProfileImagePath -Value $newPath
} 

#Now copy the profile filesystem directories using "robocopy".

But this code will not actually move the data. For that, we need robocopy. Make sure that your users are logged off before performing this operation; otherwise “NTUSER.DAT” will not get moved, and your users will get a new TEMP profile on next login:

robocopy /e /copyall /r:0 /mt:4 /b /nfl /xj /xjd /xjf C:\users e:\Users

Finally, be sure to set the default location for new profiles and the “Public” directory to your new drive as well. For that, run “Regedit”, then go to:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList
and set new paths for the registry strings “ProfilesDirectory” and “Public”. Moving the default user profile is optional.
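
Both of those values live in the same ProfileList key that the snippet above edits, so PowerShell can handle this step too. A minimal sketch, assuming the same C:-to-E: move (note that these values are normally expandable strings; adjust to taste):

$plKey = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList'
Set-ItemProperty -LiteralPath $plKey -Name ProfilesDirectory -Value 'E:\Users'
Set-ItemProperty -LiteralPath $plKey -Name Public -Value 'E:\Users\Public'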

Oh yeah… you might want to purge the old Recycle Bin cruft for your moved users as well:

rmdir /s /q C:\$Recycle.Bin

SharePoint 2010 – Email Alerts to Site Administrators

We are in the final stages of preparation for the long-overdue upgrade to SharePoint 2010.  I have set up a preview site with a copy of the production SharePoint content database, and I want to notify all site owners that they should check out their sites for major problems.  How to do?  PowerShell?  Absolutely!


Set-PSDebug -Strict
Add-PSSnapin -Name microsoft.SharePoint.PowerShell

[string] $waUrl = "https://sharepoint2010.uvm.edu"
[string] $SmtpServer = "smtp.uvm.edu"
[string] $From = "saa-ad@uvm.edu"

$allAdmins = @()

[string] $subjTemplate = 'Pending Upgrade for your site -siteURL-'
[string] $bodyTemplate = @"
Message Body Goes Here.
Use the string -siteURL- in the body where you want the user's site address to appear.
"@

$wa = Get-SPWebApplication -Identity $waUrl

foreach ($site in $wa.sites) {
	#Write-Host "Working with site: " + $site.url
	$siteAdmins = @()
	$siteAdmins = $site.RootWeb.SiteAdministrators
	ForEach ($admin in $siteAdmins) {
		#Write-Host "Adding Admin: " + $admin.UserLogin
		[string]$a = $($admin.UserLogin).Replace("CAMPUS\","")
		[string]$a = $a.replace(".adm","")
		[string]$a = $a.replace("-admin","")
		[string]$a = $a.replace("admin-","")
		if ($a -notmatch "sa_|\\system") { $allAdmins += , @($a, [string]$site.Url) }
	}
	$site.Dispose()
}

$allAdmins = $allAdmins | Sort-Object -Unique
#$allAdmins = $allAdmins | ? {$_[0] -match "jgm"} | Select-Object -Last 4

foreach ($admin in $allAdmins) {
	[string] $to = $admin[0] + "@uvm.edu"
	[string] $siteUrl = $admin[1]
	[string] $subj = $subjTemplate.Replace("-siteURL-",$siteUrl)
	[string] $body = $bodyTemplate.Replace("-siteURL-",$siteUrl)
	Send-MailMessage -To $to -From $From -SmtpServer $SmtpServer -Subject $subj -Body $body -BodyAsHtml
}

vSphere 5.1 – Train Wreck in Slow Motion

vSphere 5.1 arrived this summer to no great fanfare. We waited a few weeks, heard no sounds of howling pain (we did not listen very hard, I guess), and decided to proceed with upgrading vCenter.  I have been digging out of the wreckage ever since.

How do you know if upgrading to vSphere 5.1 is right for you?  Here are a few bullet points to help you decide:

  • Do you have CA-signed (externally trusted, or in-house Enterprise CA server) certificates in use in your current vSphere environment?
  • Are you using an external MS SQL Server to host your vCenter database?  Are you using mirrored SQL databases?
  • Is your environment currently stable and reliable?

If you answered “yes” to any of these questions, do not upgrade to vSphere 5.1.  At least, not yet. Do not deceive yourself that the vSphere 5.1.0a release will be any help, either.

What is the big problem, you ask?  The major source of pain in this release is the new “Single Sign-On Service” that handles authentication and authorization for all of the other vSphere components.  This component of vSphere has twitchy SSL certificate requirements that are poorly documented by VMware.  The SSL requirements are so touchy that in our case, even the self-signed certs generated by the installer did not work.  Unlike all of the other current vSphere components, it does not support mirrored SQL databases.  It has new permissions requirements in AD that are not documented at all, and at the time of our installation, did not even have a KB entry.  The installer is very buggy, most notably in that it requests that you set an admin password for the SSO Service, and demands password complexity, but it does not inform you when your password is unacceptably long (i.e. longer than 32 characters) or when your password contains illegal characters (i.e. most regular expression special characters).

So, if you do upgrade, be prepared for an extended service outage.  Give yourself a long service window.  Have your VMware support contract numbers handy.  Familiarize yourself with the myriad of locations that are used to log vCenter data.  Learn to use PowerShell (get-childitem -recurse | select-string -pattern “configSettingThatThevCenterInstallerBorkedUp”) and keep this page bookmarked:

http://derek858.blogspot.com/2012/09/vmware-vcenter-51-installation-part-1.html

Here at UVM we are indebted to Derek Seaman for his thorough documentation of the vSphere 5.1 installation process and detailed SSL certificate generation instructions.
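
As for the PowerShell habit, a fleshed-out version of the one-liner above might look like this (the search roots and the pattern are illustrative; point them at wherever your vCenter components keep their configuration and logs):

$roots = "$env:ProgramFiles\VMware", "$env:ProgramData\VMware"
Get-ChildItem -Path $roots -Recurse -ErrorAction SilentlyContinue |
    Where-Object { -not $_.PSIsContainer } |
    Select-String -Pattern 'configSettingThatThevCenterInstallerBorkedUp' -ErrorAction SilentlyContinue |
    Select-Object -Property Path, LineNumber, Line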

Following are some installation quirks that we encountered, presented mainly for my own reference, but maybe you will find them useful as well:

  1. “Performance Charts Experienced an Internal Error” seen in the vSphere client after the upgrade:
    This happened because vCenter Web Services did not read the database mirroring configuration from our defined ODBC data sources… it grabbed the primary database only, and not the mirror data.  The fix?  Edit:
    “%ProgramData%\VMware\VMware VirtualCenter\vcdb.properties”
    Find the “url=” line, and append:
    ;failoverPartner\=[mirrorServer]
    (Where [mirrorServer] is the actual DB mirror host name.  Don’t forget the “\” before the “=”.)
  2. Some users with permissions to vCenter 5.0 cannot log in after the upgrade.  In the vSphere web client, these users are marked as “disabled”:
    This occurred for use for two reasons:

    1. The SSO Service installer prompts us for a service account to use during install.  Following installation, the service is seen to be running as “SYSTEM”, and not the specified service account.  Change the Service to run with your planned service account using services.msc after the installation.  As an alternative, you could specify those credentials  in the vSphere Web Client -> Administration ->Sign-On and Discovery -> Configuration -> Identity Sources.  Edit your identity source, and under “Authentication Source” select “password”, then enter your service account credentials.
    2. The SSO Service needs to read account attributes that cannot be read by a standard user account (at least, not in an AD forest at a Server 2008 R2 functional level).  When we asked VMware support to define the required permissions, they replied: “an account has to have at least read-only permissions over the user and group Organization Units furthermore read permissions also on the properties of the users, such as UserAccessControl.”  After some experimentation, I just gave the SSO Service account “read all properties” rights to the account OU, and login abilities were restored.
  3. Our SSO Service broke when the mirrored database servers that we use for vCenter services had a failover event.  During install, I used the standard “failoverPartner=” JDBC connection string property to specify our failover database server.  Unfortunately, the SSO service ignores this property, and I could not identify an acceptable workaround.  Ultimately, I installed a SQL Express instance on our vCenter server to house just the SSO database.  Along the way, I tried:
    1. Using SQL Aliases, but this failed because the JDBC driver is not aware of SQL Aliases.
    2. Using a script that edits the local “hosts” file on a database failover event (a sketch of this approach appears after this list).  I then used this host name alias for the database connections.  This almost worked.  I edited the following files to use the host alias, instead of the actual database server host name:
      %ProgramFiles%\VMware\Infrastructure\SSOServer\webapps\ims\WEB-INF\classes\jndi.properties
      and:
      %ProgramFiles%\VMware\Infrastructure\SSOServer\webapps\lookupservice\WEB-INF\classes\config.properties
      Upon restart, the SSO Service was able to connect to the database, but it did not survive a failover.  Apparently the old database connection information was still in use somewhere, and VMware support was not helpful in identifying all of the database configuration locations for SSO.
    3. While VMware does have command-line configuration tools that could script reconfiguration of the database connection strings, I deemed them too fragile for production use.
  4. The option to authenticate using Windows session credentials in the vSphere Client (traditional version) stopped working after the 5.1 upgrade.  This is a bug that is fixed with the 5.1.0a release.  Unfortunately, the SSO installer for 5.1.0a does not work in upgrade mode.  Aargh!  I had to uninstall the SSO service to get the updated files into place.  Guess what the uninstaller does?  That’s right… it erases the SSO Service database (drops all tables!  Gah!), and deletes all configuration files for the service.  Before you upgrade, make sure that you have an SSO Service backup bundle.  I did, but it was outdated.  I had to re-register all of the vCenter components with SSO manually, which was a pain in the butt.
  5. vSphere Update Manager registered with vCenter using the wrong DNS name.  We could not scan ESXi hosts for updates, because vCenter was telling them to connect to an invalid URL.  To fix this, I needed to search the registry for the incorrect host name and replace it with the correct one (a registry-search sketch follows this list):
    “HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\VMware, Inc.\VMware Update Manager\VUMServer”
    For good measure I also edited:
    %ProgramFiles(x86)%\VMware\Infrastructure\Update Manager\extension.xml
    to contain the correct host name.  Then restart the Update Manager services, and we are back in business.
  6. Other fun related to VMware Update Manager… the SQL account used by Update Manager cannot have a password that exceeds 24 characters in length, and special characters in that password also may cause problems.
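
As promised above, here is a minimal sketch of the “read all properties” grant from item 2.  The OU path and service account name are hypothetical placeholders, and dsacls (from the AD DS administration tools) is just one way to script the change:

# Hypothetical OU and account names; adjust for your domain.
# 'RP' is the dsacls "read property" right ("read all properties");
# '/I:S' applies the grant to the child (user) objects in the OU.
& dsacls.exe "OU=Accounts,DC=campus,DC=example,DC=edu" /I:S /G "CAMPUS\svc-vsphere-sso:RP"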
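
The hosts-file juggling from item 3.2 looked roughly like the sketch below.  The alias name, IP address, and failover-detection method are all placeholders, not our production script:

# Repoint a hosts-file alias at the currently-active database server.
$hostsPath = Join-Path $env:SystemRoot 'System32\drivers\etc\hosts'
$alias     = 'vcdb-alias'      # host name alias referenced by the SSO config files
$activeIp  = '192.168.1.42'    # would come from your failover-detection logic
# Drop any existing entry for the alias, then append the current address:
$lines  = Get-Content $hostsPath | Where-Object { $_ -notmatch "\s$alias\s*$" }
$lines += "$activeIp`t$alias"
Set-Content -Path $hostsPath -Value $lines -Force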
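
Finally, the registry search from item 5 is easy to script.  This sketch (the stale host name is a placeholder) reports every value under the Update Manager key that still contains the bad name, so you know exactly what to fix:

# Find registry values under the VUM key that contain a stale host name.
$key = 'HKLM:\SOFTWARE\Wow6432Node\VMware, Inc.\VMware Update Manager'
Get-ChildItem -Path $key -Recurse | ForEach-Object {
    $subKey = $_
    foreach ($name in $subKey.GetValueNames()) {
        if ($subKey.GetValue($name) -match 'oldname\.campus\.example\.edu') {
            '{0} : {1} = {2}' -f $subKey.Name, $name, $subKey.GetValue($name)
        }
    }
}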

So, VMware is not my favorite company this month.  On to solve more problems.  We still cannot add new permissions to vCenter, and Performance Charts are loading like a slug in winter.

Windows Backup Performance Testing with PowerShell

While developing our new Windows file services infrastructure, we wanted to test our pre-production platform to see if there are any file server-side bottlenecks that will cause unacceptable delays in backup processing. Here at UVM we still are using EMC Networker for enterprise backup (no comments on our satisfaction with EMC will be provided at this time). EMC provides a tool, “uasm.exe”, that is used at the core of the “save.exe” and “recover.exe” commands on the backup client. If we use “uasm.exe” to back up all of the file server data to null, we should be able to detect disk, HBA, and other local I/O bottlenecks before they bite us in production.

Since Networker will break up our file server into multiple “save sets”, and run a user-definable number of save set backup processes in parallel, it also is important to determine the number of parallel backup processes needed to complete a backup in a timely fashion. Thus, we want to run several parallel “uasm.exe” processes in our tests.

PowerShell, with the assistance of “cmd.exe” and some finesse, can get this job done. Hurdles I ran into while scripting this test follow:

  1. During development, PowerShell consumed huge amounts of CPU while redirecting uasm.exe output to the PowerShell $null object. Interestingly, previous tests using uasm.exe with cmd.exe did not show this problem. To fix this, each uasm job is spawned from a one-line cmd.exe “bat” script, which is included below.
  2. Remember that PowerShell uses the null object “$null”, but that cmd.exe uses the handle “nul” (with one “L”). If you redirect to “null”, you will soon fill up your disk with a file named “null”.
  3. When I wanted to examine running jobs, it was difficult to determine which directory a job was working on. This was because I initially created a scriptblock object and passed parameters to it when starting a job. For example:
    [scriptblock] $sb = {
    	param ([string]$sPath)
    	[string[]] $argList = '/c','c:\local\scripts\uasm_cmd.bat',$sPath
    	& cmd.exe $argList
    }
    $jobs += start-job -Name $myJob -ScriptBlock $sb -ArgumentList $dir1
    

    However, when inspecting the job object’s “command” property, we see “$sPath” in the output. We want the variable expanded. How to do this? Create the scriptblock object in-line when starting the job:

    [string] $cmd = '& cmd.exe "/c","c:\local\scripts\uasm_cmd.bat",' + $dir
    $jobs += Start-Job -Name $jobName `
    	-ScriptBlock ([scriptblock]::create($cmd))
    

    This makes for more compact code, too.

  4. To check on jobs that have completed, I create an array named “$djs” (Done Jobs), populated by piping the $jobs array and filtering for “completed” jobs. I inspect $djs to see if jobs are present. In my first pass, I used the check:
    if ($djs.count -gt 0)

    Meaning, continue if there is anything in the array $djs. However, this check did not work well: appending the filtered output of $jobs would put a null item in $djs, meaning that even when no jobs had completed, $djs would still have a count of one! I fixed this by changing the test:

    if ($djs[0] -ne $null)

    Meaning, if the first entry in $djs is not a null object, then proceed.
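
    An alternative that avoids the null-entry problem entirely is to wrap the pipeline in the “@()” array operator, which yields a zero-count array when nothing matches. A quick sketch:

    # @() forces an array, so an empty result has a count of zero
    # instead of containing a single null entry:
    $djs = @($jobs | Where-Object { $_.State -eq 'Completed' })
    if ($djs.Count -gt 0) {
        # ...receive, log, and remove the completed jobs here...
    }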

The full script follows:

#uasm_jobQueue.ps1, 2011-09-30, author: J. Greg Mackinnon
#Tests performance of disk when accessed by Networker backup commands.
#   Creates a queue of directories to test ($q), then uses external command 
#   "uasm.exe" to backup these directories to null.
#Change the "$wp" variable to set the number of uasm 'worker processes' to be 
#   used during the test.
#Note: PowerShell $null object causes very high CPU utilization when used for
#   this purpose.  Instead, we call "uasm_cmd.bat" which uses the CMD.exe 'nul'
#   re-director.  'nul' does not have the same problems as $null.

set-psdebug -strict

[int] $wp = 4

# Initialize the log file:
[string] $logfile = "s:\uasm_test.log"
remove-item $logfile -Force -ErrorAction SilentlyContinue
[datetime] $startTime = Get-Date
[string] "Start Time: " + $startTime | Out-File $logfile -Append

##Create work queue array:
# Add shared directories:
[String[]] $q = gci S:\shared | ? {$_.Attributes.tostring() -match "Directory"}`
	| sort-object -Property Name | % {$_.FullName}
# Add remaining targets to queue:
$q += 'H:\','I:\','J:\','K:\','L:\','M:\','S:\sis\','S:\software\','s:\r25\'
	
[int] $dc = 0			#Count of completed (done) jobs.
[int] $qc = $q.Count	#Initial count of jobs in the queue
[int] $qi = 0			#Queue Index - current location in queue
[int] $jc = 0			#Job count - number of running jobs
$jobs = @()				#Jobs array - intended to contain running PS jobs.
	
while ($dc -lt $qc) { # Completed jobs is less than total jobs in queue
	# Keep running jobs until completed jobs is less than total jobs in queue, 
	#  and our queue count is less than the current queue index.
	while (($jobs.count -lt $wp) -and ($qc -gt $qi)) { 
		[string] $jobName = 'qJob_' + $qi + '_';
		[string] $dir = '"' + $q[$qi] + '"'
		[string] $cmd = '& cmd.exe "/c","c:\local\scripts\uasm_cmd.bat",' + $dir
		#Start the job defined in $cmd string.  Use this rather than a pre-
		#  defined scriptblock object because this allows us to see the expanded
		#  job command string when debugging. (i.e. $jobs[0].command)
		$jobs += Start-Job -Name $jobName `
			-ScriptBlock ([scriptblock]::create($cmd))
		$qi++ #Increment the queue index.
	}
	$djs = @(); #Completed jobs array
	$djs += $jobs | ? {$_.State -eq "Completed"} ;
	# $djs array will always have a count of at least 1.  However, if the 
	#    first entry is not empty (null), then there must be completed jobs to
	#    be retrieved.
	if ($djs[0] -ne $null) { 
		$dc += $djs.count;
		$djs | Receive-Job | Out-File $logfile -Append; #Log completed jobs
		$djs | Remove-Job -Force;
		Remove-Variable djs;
		$jobs = @($jobs | ? {$_.State -eq "Running"}); #rebuild jobs array.
	}
	Start-Sleep -Seconds 3
}


# Complete logging:
[datetime] $endTime = Get-Date
[string] "End Time: " + $endTime | Out-File $logfile -Append 
$elapsedTime = $endTime - $startTime
[string] $outstr =  "Elapsed Time: " + [math]::floor($elapsedTime.TotalHours)`
	+ " hours, " + $elapsedTime.minutes + " minutes, " + $elapsedTime.seconds`
	+ " seconds."
$outstr | out-file -Append $logfile

The “uasm_cmd.bat” file called in the above code block contains the following one line:

"c:\program files\legato\nsr\bin\uasm.exe" -s %1 > nul

Migrating from NetApp to Windows File Servers with PowerShell – part 2

Previously we saw how PowerShell and RoboCopy can be used to sync multi-terabyte file shares from NetApp to Windows. What I did not tell you was that this script choked and died horribly on a single share in our infrastructure. You may have seen it commented out in the previous script: “#,'R25'”.

CollegeNet Resource25… my old enemy. These clowns worked around a bug in their product (an inability to read an open text column in an Oracle DB table) by copying every text row in the database to its own file on a file server, and, to make matters worse, by copying all of the files into the same directory. Why is this bad? Ever try to get a directory listing on a directory with 480,000 1 KB files? It’s bad news. Worse, it kills robocopy. Fortunately, we have a workaround.

The archive utility “7-zip” is able to wrap up the nasty directory into a single small file, which we then can unpack on the new file server. Not familiar with 7-Zip? For shame! Get it now, it’s free:
http://www.7-zip.org/

7-zip ignores most file attributes, which seems to speed up the copy process a bit. Using robocopy, our sync operation would either run for hours on this single directory, or just hang up forever. With 7-zip, we get the job done in 30 minutes. Still slow, but better than never.

Troublesome files are found in the R25 “text_comments” directory, a subdirectory of “text”. We have prod, pre-prod, and test environments, and so need to do a few separate 7-zip archives. Note that a little compression goes a long way here. When using “tar” archives, my archive was several GB in size. With the lowest level of compression, we squeeze down to only about 14 MB. How is this possible? Well, a lot of our text comment files were empty, but uncompressed each one still occupies at least one block of storage; at 480,000 files and (say) 4 KB per block, that is nearly 2 GB of allocated-but-mostly-empty space, which compresses to almost nothing.

Code snippet follows.

#Sync R25 problem dirs

Set-PSDebug -Strict

# Initialize the log file:
[string] $logfile = "s:\r25Sync.log"
remove-item $logfile -Force -ErrorAction SilentlyContinue
[datetime] $startTime = Get-Date
[string] "Start Time: " + $startTime | Out-File $logfile -Append

function zipit {
	param ([string]$source)
	[string] $cmd = "c:\local\bin\7za.exe"
	[string] $arg1 = "a" #add (to archive) mode
	[string] $arg2 = join-path -Path $Env:TEMP -ChildPath $($($source | `
		Split-Path -Leaf) + ".7z") # filespec for archive
	[string] $arg3 = $source #spec for source directory
	[string] $arg4 = "-mx=1" #compression level... minimal for performance
	#[string] $arg4 = "-mtm=on" #timestamp preservation - commented out for perf.
	#[string] $arg5 = "-mtc=on"
	#[string] $arg6 = "-mta=on"
	#invoke command, route output to null for performance.
	& $cmd $arg1,$arg2,$arg3,$arg4 > $null 
}

function unzipit {
	param ([string]$dest)
	[string] $cmd = "c:\local\bin\7za.exe"
	[string] $arg1 = "x" #extract archive mode
	[string] $arg2 = join-path -Path $Env:TEMP -ChildPath $($($dest | `
		Split-Path -Leaf) + ".7z")
	[string] $arg3 = "-aoa" #overwrite existing files
	#destination directory specification:
	[string] $arg4 = '-o"' + $(split-path -Parent $dest) + '"' 
	#invoke command, route to null for performance:
	& $cmd $arg1,$arg2,$arg3,$arg4 > $null 
	Remove-Item $arg2 -Force # delete archive
}

[String[]] $zips = "V3.3","V3.3.1","PROD\WinXp\Text"
[string] $sourceD = "\\files\r25"
[string] $destD = "s:\r25"

foreach ($zip in $zips) {
	Get-Date | Out-File $logfile -Append 
	[string] "Compressing directory: " + $zip | Out-File $logfile -Append 
	zipIt $(join-path -Path $sourceD -ChildPath $zip)
	Get-Date | Out-File $logfile -Append 
	[string] "Uncompressing to:" + $destD | Out-File $logfile -Append
	unzipit $(Join-Path -Path $destD -ChildPath $zip)
}

Get-Date | Out-File $logfile -Append 
[string] "Syncing remaining files using Robocopy..." | Out-File $logfile -Append
$xd1 = "\\files\r25\V3.3" 
$xd2 = "\\files\r25\V3.3.1" 
$xd3 = "\\files\r25\PROD\WinXP\text"
$xd4 = "\\files\r25\~snapshot"
$roboArgs = @("/e","/copy:datso","/purge","/nfl","/ndl","/np","/r:0","/mt:4",`
	"/b",$sourceD,$destD,"/xd",$xd1,$xd2,$xd3,$xd4)

& robocopy.exe $roboArgs

Get-Date | Out-File $logfile -Append 
[string] "Done with Robocopy..." | Out-File $logfile -Append

# Complete logging:
[datetime] $endTime = Get-Date
[string] "End Time: " + $endTime | Out-File $logfile -Append 
$elapsedTime = $endTime - $startTime
[string] $outstr =  "Elapsed Time: " + [math]::floor($elapsedTime.TotalHours)`
	+ " hours, " + $elapsedTime.minutes + " minutes, " + $elapsedTime.seconds`
	+ " seconds."
$outstr | out-file -Append $logfile