
Manifests and Digital Signatures for Self Extracting Scripts

It has been quiet in this little corner of the blogosphere lately. Must be because I am not doing anything, right?

Wrong. I have run out of time to blog. But today I will make an exception because I need to provide an update to an old post:

As a substitute for becoming a real programmer, I have for years been writing VBScripts and wrapping them up with the 7-Zip self-extracting executable. After the release of Windows 8, this model became more difficult. Out of the box, the 7-Zip self-extractor started generating application compatibility troubleshooter pop-ups on clients. Even prior to that, clients would get warnings asking them “do you really want to execute this scary unsigned possibly-from-a-murdering-hacker program?” when they launched our executables.

The solution for this is, of course, to add an application manifest to the self-extractor, and then to digitally sign the resulting executable. Easy, right?

I actually did this a few years ago for our venerable Wi-Fi profile installation tool. It was not quite easy, and unfortunately I never did get the process fully automated. The roadblock was in automating the addition of a manifest to the application. Microsoft’s tool for this, “mt.exe”, from the Windows SDK, consistently corrupts my executables. Others in the blogosphere have identified the tool “Resource Hacker” to fill this need:

I added this tool to my ugly-old script packaging batch files, and had good success with eliminating the program compatibility dialogs:

..\bin\resource_hacker\ResourceHacker.exe -addoverwrite %fname%.exe, %fname%.exe, %fname%.manifest, 24,1,
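For reference, the manifest itself is tiny (the “24,1” in the Resource Hacker command above is the RT_MANIFEST resource type and resource ID). A minimal sketch looks something like the following; the execution level and the supportedOS entries you declare depend on what the wrapped script needs and which Windows versions you care about:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- use "requireAdministrator" instead if the wrapped script needs elevation -->
        <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
  <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
    <application>
      <!-- one supportedOS Id="{GUID}" element per Windows version you declare support for -->
    </application>
  </compatibility>
</assembly>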

I also was able to streamline the signing process with the following batch code:

set fname=fixThunderbirdMailboxPath
set SDKPath="C:\Program Files (x86)\Windows Kits\10\bin\x86"
set TimeStampURL=""
set /P CertPath="Enter the full path to the PKCS12/PFX signing certificate:"
set /P CertPass="Enter the password for certificate file:"
%SDKPath%\signtool.exe sign /f "%CertPath%" /p "%CertPass%" /t %TimeStampUrl% /v %fname%.exe
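To sanity-check the result, signtool can also verify the signature it just applied (same variables as above):

%SDKPath%\signtool.exe verify /pa /v %fname%.exe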

Renewing SSL Certificates on ADFS 3.0 (Server 2012 R2)

I recently had to replace the public-facing service communication certificates on our primary ADFS deployment on Server 2012 R2. I followed a procedure that I thought had a reasonable chance of actually doing what I wanted it to:

  1. Obtained a new private key with signed certificate.
  2. Saved the file to a pfx, and imported it onto each node in the ADFS cluster
  3. Set permissions on the certificate according to documentation
  4. Used the ADFS MMC -> Certificates -> Set Service Communications Certificate.

Everything seemed to go okay, but after a bit we started to get some complaints that some of our users could not access the Office 365 Pro Plus software download page. This was a curiosity to me, because I could not reproduce the problem. A colleague later noticed a raft of SSL errors in the System event log on one of the ADFS nodes, and disabled that node in the load balancer configuration.

When I finally got around to investigating, I noticed that the system log reported problems from source ‘HTTPEvent’, with details DeviceObject: \Device\Http\ReqQueue along with two different Endpoint values. What gives?

I found a related TechNet Blog that shed some light on the subject:

According to this document, after setting the Service Communications Certificate in the MMC, you first must fetch the thumbprint of the new Service Communications Cert. Take note of the certificate thumbprint, then run:
Set-ADFSSslCertificate -Thumbprint [yourThumbprint]

“Set-AdfsSslCertificate” will fix the HTTP.SYS bindings used by ADFS. Apparently the MMC does not set the bindings, which is pretty annoying because it leaves the service in a pretty badly broken state. The HTTP bindings are mentioned in this TechNet documentation:
BUT, the docs do not explicitly state that the Set-AdfsSslCertificate cmdlet needs to be run on all of the ADFS server nodes in your farm. This also is a key missing detail.
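For the record, here is roughly what that looks like on a farm node (a sketch; I am assuming the stock ADFS cmdlets on Server 2012 R2 and that the new certificate is already imported on the node):

# Find the thumbprint of the new Service Communications certificate:
Get-AdfsCertificate -CertificateType Service-Communications | Select-Object Thumbprint

# Point the HTTP.SYS bindings at that certificate (repeat on EVERY node in the farm):
Set-AdfsSslCertificate -Thumbprint yourThumbprintHere

# Confirm the bindings now reference the new certificate:
Get-AdfsSslCertificate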

Good Documentation… you always take it for granted, until you don’t have it anymore.

Note above that I mentioned a binding problem with a second endpoint address. This was a carry-over from our initial deployment of ADFS 3. Back then, Microsoft did not provide a health check URL for ADFS, and the supplemental binding was needed to allow health monitor connections from our F5 load balancer without SNI, which ADFS 3.0 requires but the F5 health monitor does not support. These days (and if you have KB2975719 installed), you can instead monitor the following URL from your F5:
More details on this solution can be found here:

While it is nice having a proper health check, problems can arise when your ADFS server HTTP bindings go sour. It would seem that nothing is perfect.

Ping plotting with PowerShell

Waaaay back we used to use a spiffy little tool called “ping plotter” to discover vacant IP addresses on our subnets. I had not had to do an exhaustive study of this for a while. When it came up again today, I thought “I’ll bet we can do that with two lines of PowerShell.” But I was wrong… it took three lines, since I needed to initialize an array variable:

#Initialize $range as an array variable:
$range = @()

#Populate $range with integers from 2 through 254
#  (for a Class C ipv4 subnet):
for ($i=2; $i -le 254; $i++) {$range += $i}

#Write out IP addresses for systems that do not have registered DNS 
#  names in the IP subnet
$range | % {$ip = "192.168.1." + $_ ; $out = & nslookup $ip 2>&1; `
  if ($out -match 'Non-existent domain') {write-host $ip}}

Now some variations… write out only addresses with no DNS entry and that do not respond to ping. (This will help to weed out addresses that are in use but that for whatever reason do not have a DNS name.):

$range | % {$ip = "132.198.102." + $_ ; $out = & nslookup $ip 2>&1; `
  if ($out -match 'Non-existent domain') {$out2 = & ping $ip -n 1; `
  if ($out2 -match 'Destination host unreachable') {write-host $ip}}}

… and perhaps most usefully, write out addresses with no DNS entry and that cannot be located using ARP. (This will weed out in use addresses with no DNS and a firewall that blocks ICMP packets.):

$range | % {$ip = "132.198.102." + $_ ; $out = & nslookup $ip 2>&1; `
  if ($out -match 'Non-existent domain') {& ping $ip -n 1 > $null; `
  $out2 = arp -a $ip; if ($out2 -match 'No ARP Entries') {write-host $ip}}}
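For what it is worth, on Windows 8 / Server 2012 and later the same idea can be written with Resolve-DnsName and Test-Connection instead of shelling out to nslookup and ping (a sketch, not what I ran at the time):

$range | % {$ip = "192.168.1." + $_ ; `
  $dns = Resolve-DnsName $ip -ErrorAction SilentlyContinue; `
  if (-not $dns -and -not (Test-Connection $ip -Count 1 -Quiet)) {write-host $ip}}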

We are hiring!

We need an Exchange admin!  If you know Exchange, and you would like to work with us here on “Team Awesome” at UVM, read on…


The University of Vermont is seeking a Senior Windows and Exchange Systems Administrator/Engineer to help support UVM’s central server infrastructure, focusing primarily on Microsoft technologies, including Exchange, SharePoint, and Lync. UVM is deploying a new Exchange environment and migrating all students, faculty, and staff to this system.

In this position, you will get to design and build your ideal Exchange environment for 20,000+ people.

The successful applicant will help design, deploy, and support this exciting new service for the campus. We are looking for someone with the expertise and creativity to help improve IT at UVM; someone who can design and build reliable and secure systems to solve complex problems.

The University of Vermont is located in beautiful Burlington, Vermont, recognized as one of the best places to live in the US.

Expertise with both on-premise and Office 365 services is valuable. In addition, this position will help support Active Directory, and related services such as ADFS. Security management and monitoring are also key functions. The successful applicant will provide advanced technical expertise and expert level troubleshooting of Microsoft collaboration services. The applicant will need to configure, install, maintain, and monitor enterprise server equipment and software; initiate and manage projects in collaboration with other systems administrators; train other technical and operational staff; perform system tuning and troubleshooting; identify and implement security enhancements; perform capacity planning, business continuity and disaster recovery services; and provide systems documentation.

UVM’s central Systems Architecture & Administration department works on the latest in server and storage technology across multiple datacenters. Our systems support most aspects of server computing at UVM, including research, on-line learning, and administrative functions. Our highly technical and energetic team works collaboratively to improve IT at UVM, positively affecting thousands of students, faculty, and staff.

Scripting experience in a Windows environment is required (PowerShell at minimum). This position will help build interfaces for the new Exchange environment to other IT systems on campus, such as our website, Blackboard, Banner, and Luminis.

We offer competitive compensation for exceptional people. UVM has a strong benefits package, including tuition remission benefits.

For more information, and to apply, please visit:

Logon Performance in VDI Land

After spending the better part of three days attempting to shave time off of login times in our VDI environment (VMware View-based), I thought I should scribe down some notes on effective troubleshooting tools and techniques. There were a lot of self-inflicted wounds this time, and I could have saved myself a lot of time if 1) I had documented the build process for my new VDI pool and 2) I had taken notes the last time I made login optimizations.

WARNING: This post is largely unedited and probably a bit incoherent. Read at your own risk.

Group Policy:

Computer Configuration->Policies->Administrative Templates->System: Display highly detailed status messages
This setting causes the Windows login screen to provide more verbose feedback to the user about what winlogon.exe is doing at any given time. Rather than just seeing “Preparing Windows”, you will instead see things like “Processing Drive Map Preferences”. If the logon screen hangs on one of these steps for 30 seconds, you will know exactly which Group Policy setting is killing logon performance.
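If you would rather flip this on for a single test machine (or bake it into a pool template) without waiting on Group Policy, my understanding is that the policy simply sets one registry value, so something like this should be equivalent:

# "Display highly detailed status messages" maps to the VerboseStatus policy value:
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' `
  -Name 'VerboseStatus' -Value 1 -Type DWord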

Event Viewer:

Windows 7 and 8 both include a Group Policy operational log under: Event Viewer->Applications and Services Logs->Microsoft->Windows->GroupPolicy->Operational.  This log contains a lot of useful information about the timing of various group policy components, and many times will contain all of the information you need to pinpoint troublesome Group Policy Settings.

If the Event Viewer does not have all of the information you need, you can enable verbose policy logging:

I typically find that this is not necessary, and that the Event Viewer has the information that I need.
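For completeness, the verbose logging in question is the old gpsvc.log trick; as I understand it, it is switched on with a Diagnostics registry value, roughly like this, and the log then lands in %windir%\debug\usermode\gpsvc.log:

New-Item -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Diagnostics' -Force | Out-Null
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Diagnostics' `
  -Name 'GPSvcDebugLevel' -Value 0x30002 -Type DWord
New-Item -ItemType Directory -Path "$env:windir\debug\usermode" -Force | Out-Null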

Problems with User Profile loading often can be found under: Event Viewer->Applications and Services Logs->Microsoft->Windows->User Profile Service->Operational log. This log is especially useful when using roaming or mandatory profiles. Unfortunately, this entry just tracks initial profile location and loading, and does not log anything related to Active Setup.

Windows Performance Toolkit:

Part of the Windows Assessment and Deployment Kit (ADK) for Windows 8.1. The Performance Toolkit includes the Windows Performance Recorder and Windows Performance Analyzer. Run the Recorder with the “Boot” performance scenario, with 1 iteration, then use the Analyzer to read the trace file that was created during reboot and logon. Make note of the relative time of each event in the boot/logon process (i.e. time of boot, time of login, time to desktop load). The Recorder only logs relative time from boot up, so you might have some trouble correlating wall-clock time with recorded event times. Try to locate processes that line up with the delays you see during login.

As an alternative, you can enable boot logging using “ProcMon”. The Performance Analyzer arguably offers better visualizations of boot issues, but ProcMon has more comprehensive process information, and may be a more familiar tool for many administrators.

Active Setup:

Active Setup is a pain. This is a poorly documented mechanism by which applications (mostly Microsoft applications) can run per-user configuration tasks (generally first-run tasks) on logon. It is synchronous, meaning each task must be completed before the next runs. Also, Active Setup runs in Winlogon.exe and blocks loading of the desktop. Because of this, Active Setup has the potential to greatly delay first time logon. As a result, it also becomes a scapegoat for logon delays, even when it is not the root cause. I have no really helpful advice for troubleshooting Active Setup other than to use the Performance Analyzer or ProcMon to locate Active Setup processes that take a long time to execute. See the following for a better explanation of the internals of Active Setup:
And this for an explanation of situations in which you might want to disable Active Setup:
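One thing that does help a little: Active Setup works by comparing each component key under HKLM\SOFTWARE\Microsoft\Active Setup\Installed Components against the matching key in HKCU, and running the component’s StubPath when the user’s copy is missing or has a lower version. A quick sketch for dumping the per-machine components that will actually run something:

Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components' |
  ForEach-Object { Get-ItemProperty $_.PSPath } |
  Where-Object { $_.StubPath } |
  Select-Object '(default)', Version, StubPath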

SysInternals AutoRuns:

You can wade through every obscure registry key looking for processes that run at login, or you can just use AutoRuns and pull them all up in one place. Thanks to AutoRuns, I was able to locate the entry point for an irksome logon process that was running for no apparent reason. I had forgotten that under Windows Vista and later, Scheduled Tasks can use user logon events as a trigger for starting a process. This brings us to the process that killed two days of my life…

Minor Troubles with Google Chrome:

Using The Performance Analyzer, I concluded initially that Google Chrome was adding over 30 seconds of time to logon on one of my VDI pools.  While Google is launching “GoogleUpdate.exe” at each user logon event (via a scheduled task trigger), these scheduled tasks really should not block loading of the desktop.  This task runs in other pools, without significant delay.  In this pool, the task was running for a long time (over a minute) before exiting.  The likely cause of this excessive delay is the internet-bound HTTP/HTTPS filtering that is taking place in this pool… Google cannot update itself if outbound internet access is blocked.  Still, long running or not, Chrome Update was not blocking loading of the desktop.

That being said, our users really do not need Chrome to check for updates on each and every logon, so how do we fix this?
Investigation of Active Setup showed that Active Setup for Chrome already had been completed in our Mandatory roaming profile. So why was Chrome setup running on each and every user logon? Because it also was configured as a Scheduled Task that runs on each user logon event. Aargh! As noted above, SysInternals AutoRuns was used to locate this entry point.

Unfortunately, Google Update is a bit on the complicated side:

There are two separate Google Update system services, two separate Scheduled Tasks related to Google Update, and three separate task triggers, including the one that runs at logon. For now, I have just disabled the scheduled tasks in my template machine. Unfortunately, this completely disabled Google Update in the VDI pool. Also, the changes will be wiped out if we update Google manually or via SCCM in the future. Better would be a Group Policy-based solution.

Some of you may know that there actually is official registry/Group Policy support for control of Google Update. See:
However, these settings just disable Auto Update entirely. They do not allow you to control how and when updates will apply (i.e. disable user-mode updates, but leave machine-mode updates intact).

I expect the “real” fix here would be to run a separate scheduled task script or startup script that uses PowerShell to find and remove the scheduled task triggers. That’s more time than I want to spend on this project at present.
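If I do get back to it, I would expect that script to look something like this (an untested sketch; it assumes the stock GoogleUpdateTaskMachine* task names and the ScheduledTasks module on Windows 8 or later):

Get-ScheduledTask -TaskName 'GoogleUpdateTaskMachine*' | ForEach-Object {
  # Drop only the logon trigger; keep the daily/periodic triggers intact:
  $_.Triggers = @($_.Triggers | Where-Object { $_.CimClass.CimClassName -ne 'MSFT_TaskLogonTrigger' })
  $_ | Set-ScheduledTask
}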

Rejecting read receipt requests with Procmail

In preparation for my exit, I have been re-routing various bulk mail messages using “Procmail” recipes. While I was working on this, I got an email from a colleague who always requests read receipts for every message that he sends. Despite being asked to stop, messages continue to come in with read receipt requests. I thought “wouldn’t it be great if I could get procmail to just reject these messages outright?”

Consulting common references on Procmail was not helpful because they rely on procmail having access to a full shell environment. My colleague Jim Lawson gave me the framework for a different solution which instead involves the use of the “:0fwc” construct to pipeline multiple procmail actions. Interesting is the use of the “appendmsg” command, for which I cannot find a reference anywhere. It works, though. Aggressive/Aggressive handling of read receipt requests achieved!

#Use ":0c:" below if you want to receive a copy of the original message instead of just rejecting it.
# Check to see if the message contains any of the following command read-receipt request headers:
* ^Disposition-Notification-To:|\
# Prevent mail loops... mail loops are bad.
* ! ^X-Loop: jgm@uvm\.edu
        | formail -pQUOTE: -k -r
        BODY=`formail -I ""`

        | formail -A"X-Loop:"\
        -I"Subject: Rejected mail: Read Receipts not accepted by this account."
        #-I"To: ${REJECT}" \
        # scrape off the body
        | formail -X ""

        | appendmsg "Message rejected because it contains a read receipt request." 

        # put back the quoted body
        | appendmsg $BODY
        | sendmail -t

Migrating Windows-auth users to Claims users in SharePoint

A short time back I published an article on upgrading a Windows-authentication-based SharePoint environment to an ADFS/Shibboleth claims-based environment. At that time I said I would post the script that I plan to use for the production migration when it was done. Well… here it is.

This script is based heavily on the one found here:
Unfortunately, “SharePoint-Voodoo” appears to be down at the time of this writing, so I cannot make appropriate attribution to the original author. This script helped speed along this process for me… Thanks, anonymous SharePoint Guru!

My version of the script adds the following:

  • Adds stronger typing to prevent script errors.
  • Adds path checking for the generated CSV file (so that the script does not exit abruptly after running for 30 minutes).
  • Introduces options to specify different provider prefixes for windows group and user objects.
  • Introduces an option to add a UPN suffix to the new user identity
  • Collects all user input before doing any processing to speed along the process.
  • Adds several “-Limit All” parameters to the “Get-SP*” cmdlets to prevent omission of users from the migration process.

There are still some minor problems. When run in “convert” mode, the script generates an error for every migrated user, even when the user is migrated successfully. I expect this is owing to a bug in “Move-SPUser”, and there probably is not much to be done about it. Because I want to migrate some accounts from windows-auth to claims-with-windows-auth, there is some manual massaging of the output file that needs to be done before running the actual migration, but I think this is about as close as I can get to perfecting a generic migration script without making the end-product entirely site-specific.

I will need to run the script at least twice… once for my primary “CAMPUS” domain, and once to capture “GUEST” domain users. I may also want to do a pass to convert admin and service account entries to claims-with-windows-auth users.

Set-PSDebug -Strict
Add-PSSnapin microsoft.sharepoint.powershell -ErrorAction 0

# Select Options
Write-Host -ForegroundColor Yellow "'Document' will create a CSV dump of users to convert. 'Convert' will use the data in the CSV to perform the migrations."
Write-Host -ForegroundColor Cyan "1. Document"
Write-Host -ForegroundColor Cyan "2. Convert"
Write-Host -ForegroundColor Cyan " "
[int]$Choice = Read-Host "Select an option 1-2: "

switch ($Choice) {
    1 {[bool]$convert = $false}
    2 {[bool]$convert = $true}
    default {Write-Host "Invalid selection! Exiting... "; exit}
}
Write-Host ""

$objCSV = @()
[string]$csvPath = Read-Host "Please enter the path to save the .csv file to. (Ex. C:\migration)"
if ((Test-Path -LiteralPath $csvPath) -eq $false) {
    Write-Host "Invalid path specified! Exiting..."; exit
}

if ($convert -eq $true) {
    # "Convert" mode: read the CSV created by a previous "Document" run and migrate each user.
    $objCSV = Import-CSV "$csvPath\MigrateUsers.csv"

    foreach ($object in $objCSV) {
        $user = Get-SPUser -identity $object.OldLogin -web $object.SiteCollection
        write-host "Moving user:" $user "to:" $object.NewLogin "in site:" $object.SiteCollection
        move-spuser -identity $user -newalias $object.NewLogin -ignoresid -Confirm:$false
    }
} else {
    # "Document" mode: collect all input up front, then walk the chosen scope and build the CSV.
    [string]$oldprovider = Read-Host "Enter the Old Provider Name (Example -> Domain\ or i:0#.f|MembershipProvider|) "
    [string]$newprovider = Read-Host "Enter the New User Provider Name (Example -> Domain\ or i:0e.t|MembershipProvider|) "
    [string]$newsuffix = Read-Host "Enter the UPN suffix for the new provider, if desired (Example -> "
    [string]$newGroupProvider = Read-Host "Enter the New Group Provider Name (Example -> Domain\ or c:0-.t|MembershipProvider|\) "

    # Select Options
    Write-Host -ForegroundColor Yellow "Choose the scope of the migration - Farm, Web App, or Site Collection"
    Write-Host -ForegroundColor Cyan "1. Entire Farm"
    Write-Host -ForegroundColor Cyan "2. Web Application"
    Write-Host -ForegroundColor Cyan "3. Site Collection"
    Write-Host -ForegroundColor Cyan " "
    [int]$scopeChoice = Read-Host "Select an option 1-3: "

    switch ($scopeChoice) {
        1 {[string]$scope = "Farm"}
        2 {[string]$scope = "WebApp"}
        3 {[string]$scope = "SiteColl"}
        default {Write-Host "Invalid selection! Exiting... "; exit}
    }
    Write-Host ""

    if ($scope -eq "Farm") {
        $sites = @()
        $sites = Get-SPSite -Limit All
    } elseif ($scope -eq "WebApp") {
        $url = Read-Host "Enter the Url of the Web Application: "
        $sites = @()
        $sites = Get-SPSite -WebApplication $url -Limit All
    } elseif ($scope -eq "SiteColl") {
        $url = Read-Host "Enter the Url of the Site Collection: "
        $sites = @()
        $sites = Get-SPSite $url
    }

    foreach ($site in $sites) {
        $webs = @() #needed to prevent the next foreach from attempting to loop a non-array variable
        $webs = $site.AllWebs

        foreach ($web in $webs) {
            # Get all of the users in a site
            $users = @()
            $users = Get-SPUser -web $web -Limit All #added "-Limit" since some webs may have large user lists.

            # Loop through each of the users in the site
            foreach ($user in $users) {
                # Create an array that will be used to split the user name from the domain/membership provider
                $displayname = $user.DisplayName
                $userlogin = $user.UserLogin

                if (($userlogin -like "$oldprovider*") -and ($objCSV.OldLogin -notcontains $userlogin)) {
                    # Separate the user name from the domain/membership provider
                    # (claims-format logins contain "|"; classic logins are DOMAIN\user)
                    if ($userlogin -like "*|*") {
                        $a = $userlogin.split("|")
                        $username = $a[1]

                        if ($username -like "*\*") {
                            $a = $username.split("\")
                            $username = $a[1]
                        }
                    } else {
                        $a = $userlogin.split("\")
                        $username = $a[1]
                    }

                    # Create the new username based on the given input
                    if ($user.IsDomainGroup) {
                        [string]$newalias = $newGroupProvider + $username
                    } else {
                        [string]$newalias = $newprovider + $username + $newsuffix
                    }

                    $objUser = "" | select OldLogin,NewLogin,SiteCollection
                    $objUser.OldLogin = $userLogin
                    $objUser.NewLogin = $newAlias
                    $objUser.SiteCollection = $site.Url

                    $objCSV += $objUser
                }
            }
        }
    }

    $objCSV | Export-Csv "$csvPath\MigrateUsers.csv" -NoTypeInformation -Force
}

Integrating SharePoint 2013 with ADFS and Shibboleth… The Motion Picture

Time again to attempt to implement that exciting technology, Federation Services (Web Single Sign On, SAML, WS-Federation, or whatever you want to call it) with SharePoint. Last time we tried this, SharePoint 2010 was a baby product, MS was just testing the waters with SAML 2.0 support in ADFS 2.0, and Shibboleth 2 was pretty new stuff here at UVM. The whole experience was unsatisfying. SharePoint STS configuration was full of arcane PowerShell commands, ADFS setup was complicated by poor farm setup documentation, and interop of Shibboleth 2 with ADFS 2 was not documented at all.  After wading through all of that mess, we ended up with user names being displayed as “i:05.t|adfsServiceName|userPrincipalName” (bleagh!), and with many applications that could not deal with Web SSO authenticators.  My general conclusion was that SharePoint really was not ready for federated login.

Four years later, things have changed a bit.  STS configuration still requires dense PowerShell commands, but at least it is better documented.  ADFS and Shibboleth interoperability also are excellently documented at this point.  Microsoft has most Office apps working with passive SAML authenticators, and has pledged to get the rest working this year.  While I would not judge the use of ADFS (with or without Shibboleth) to be the easy route to take to SharePoint 2013 deployment, it at least looks functional at this point.  So let’s kick the tires and see how it works…

Part 1: ADFS Setup

First, we need to setup ADFS.  We chose to deploy “ADFS 3” using Windows Server 2012 R2 as the OS platform.  ADFS 3 is required to support the new “workplace join” feature of Server 2012 R2.  Since we want to test this, we would need ADFS 3 anyway.  Unfortunately, ADFS on Server 2012 R2 is pretty virgin territory, and does not have the same troubleshooting resources available for it as do earlier releases.

Most of the configuration steps followed the TechNet documentation without variation.  We did find that we needed to modify the ACL on the service account used to run ADFS… we added the “Authenticated Users” principal to the ACL, and assigned the “Read all properties” right.

For us, the most complicating factor in ADFS 3 deployment is the replacement of IIS with the Windows kernel-mode HTTP server “http.sys”.  When we started experiencing connectivity problems with various clients to ADFS, we had no experience with HTTP.sys to assist in troubleshooting.  Most articles on HTTP.sys relate to remote desktop services, and to Server 2003.  Our problems with HTTP.sys were rooted in an undocumented requirement for clients to submit SNI (Server Name Indication) information in their TLS “CLIENT HELLO” sequence.  I had to open a support case with Microsoft to resolve this problem, and only afterward was I able to find any Internet discussions that reflected MS advice:

It seems that if http.sys is bound using the hostname:port format, then TLS will require SNI.  If the binding is instead specified using ipAddr:port, SNI will not be required.  To fix our problem, we just needed to add a second HTTPS certificate binding that is specified by IP address rather than hostname.  Here is the procedure:

  1. On each ADFS server and proxy, open an elevated command prompt
  2. run: netsh http show sslcert
  3. Record the certificate hash and application ID for the certificate used by ADFS
  4. run: netsh http add sslcert ipport= certhash= appid={}
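In practice, steps 2 through 4 look something like the lines below (a sketch; a wildcard 0.0.0.0:443 binding is one common choice for the supplemental binding, and the certhash/appid values are copied from the existing binding shown by “show sslcert”):

netsh http show sslcert

netsh http add sslcert ipport=0.0.0.0:443 certhash=yourCertificateHash appid={yourApplicationId}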

Part 2: Configuring SharePoint to use ADFS

I started with Microsoft’s guide to configuring SAML authentication for SharePoint 2013 using ADFS:
This is a well written guide, and fairly easy to follow.  The only issue I take with the article is the recommendation to use the “emailAddress” claim as the “identifier claim” in SharePoint.  In many federated login scenarios, a foreign Idp may not want to release the email address attribute.  However, some variation of UPN likely will be released.  In the case of the InCommon federation, the “ePPN” value (eduPersonPrincipalName) generally is available to federation partners.  For this reason I chose to implement “UPN” as the “identifier claim” in the last command of phase 3 of the document:

$ap = New-SPTrustedIdentityTokenIssuer -Name  -Description  -realm $realm -ImportTrustCertificate $cert -ClaimsMappings $emailClaimMap,$upnClaimMap,$roleClaimMap,$sidClaimMap -SignInUrl $signInURL -IdentifierClaim $upnClaimMap.InputClaimType
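For context, $upnClaimMap there is one of the claims mappings created earlier in that phase; it is typically defined along these lines (a sketch of the usual pattern, not the document’s exact text):

$upnClaimMap = New-SPClaimTypeMapping `
  -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" `
  -IncomingClaimTypeDisplayName "UPN" -SameAsIncoming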

Part 3: Migrating SharePoint Users From Windows to Claims

Since we are planning an upgrade, not a new deployment, we need to migrate existing Windows account references in SharePoint to federated account references. To make this happen, we need to establish the federated account identity format. I simply logged in to an open-access SharePoint site as an ADFS user, and recorded the account information. In this case, our accounts look like this:


and groups:


We then can use PowerShell to find all account entries in SharePoint, and use the “Move-SPUser” PowerShell cmdlet to convert them.  I am still working on a final migration script for the production cutover, and I will try to post it here when it is ready.
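For a single account, the conversion boils down to something like this (a sketch; the site URL, user name, and token issuer name are made up for illustration):

$user = Get-SPUser -Web "https://sharepoint.uvm.edu" -Identity "CAMPUS\jsmith"
Move-SPUser -Identity $user -NewAlias "i:0e.t|adfs.uvm.edu|jsmith@uvm.edu" -IgnoreSID -Confirm:$false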

Of some concern is keeping AD group permissions functional.  It turns out that SharePoint will respect AD group permissions for ADFS principals, but there are a few requirements:

  1. The incoming login token needs to contain the “Role” claim type (http://schemas.microsoft.com/ws/2008/06/identity/claims/role), and this claim type needs to contain the SamAccountID (or CN) of the AD Group that was granted access to a site.  (In ADFS, this means that you need to release the AD LDAP Attribute “Token-Groups – Unqualified Names” as the outgoing claim type “Role”.)
  2. When adding AD Groups as permissions in SharePoint, we need to use the “samAccountName” LDAP attribute as the identifier claim.  The LDAPCP (see Part 4) utility makes this easy as it will do this for us automatically when configured to search AD.

Requirement 1 could be a problem when using Shibboleth as the authentication provider.  Our Shibboleth deployment does not authenticate against AD, so a Shibboleth ticket will not contain AD LDAP “tokenGroup” data in the “role” claim.  I am working with the Shibboleth guys to see if there is any way to augment Shibboleth tokens with data pulled from AD.

Part 4:  The SharePoint PeoplePicker and ADFS

Experienced SharePoint users all know (and mostly love) the “people picker” that searches Active Directory to validate user and group names that are to be added to the access list for a SharePoint site.  One of the core problems with federation services is that they are authentication systems only.  ADFS and Shibboleth do not implement a directory service.  You cannot do a lookup on an ADFS principal that a user adds to a SharePoint site.  This is particularly irksome, since all of our ADFS users actually have matching accounts in Active Directory.

Fortunately, there is a solution… you can add a “Custom Claims Provider” into SharePoint which will augment incoming ADFS claims with matching user data pulled from Active Directory.  This provider also integrates with the PeoplePicker to allow querying of AD to validate Claims users that are being added to a SharePoint site.  A good write-up can be found here:

“But I don’t want to compile a SharePoint solution using Visual Studio,” I hear you (and me) whine.  No problem… there is a very good pre-built solution available from CodePlex:

Normally I do not like using third-party add-ons in SharePoint.  I will make an exception for LDAPCP because:

  1. It works.
  2. It saves me hours of Visual Studio work.
  3. It is a very popular project and thus likely to survive on CodePlex until it is no longer needed.
  4. If the project dies, we can implement our own Claims Provider using templates provided elsewhere with (hopefully) minimal fuss.

My only outstanding problem with LDAPCP is that it will not query principals in our Guest AD forest.  However, there are some suggestions from the developer along these lines:

To summarize, the developer recommends compiling our own version of LDAPCP from the provided source code.   We would use the method “SetLDAPConnections” found in the “LDAPCP_Custom.cs” source file to add an additional LDAP query source to the solution.  I will try this as time permits.

Part 5: Transforming Shibboleth tokens to ADFS

So far we have not strayed too far from well-trodden paths on the Internet.  Now we get to the fun part… configuring ADFS as a relying party to our Shibboleth Idp, then transforming the incoming Shibboleth SAML token into an ADFS token that can be consumed by SharePoint.

Microsoft published a rather useful guide on ADFS/Shibboleth/InCommon integration:
Using this guide we were able to set up ADFS as a relying party to our existing Shibboleth Idp with minimal fuss.  Since we already have an Idp, we skipped most of Step 1, and then jumped to Step 4 as we did not need to configure Shibboleth as an SP to ADFS.

I had the local “Shibboleth Guy” add our ADFS server to the relying parties configuration file on the Shib server, and release “uvm-common” attributes for this provider. This allows SharePoint/ADFS users to get their “eduPersonPrincipalName” (ePPN) released to ADFS/SharePoint from Shibboleth. However, SharePoint (and ADFS) do not natively understand this attribute, so we configure a “Claim Rule” on the “Claims Provider Trust” with Shibboleth.  The rule is an “Acceptance Transformation Rule” that we title “Transform ePPN to UPN”, and it has the following syntax:

c:[Type == "urn:oid:"]
=> issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn",
Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value,
ValueType = c.ValueType);

The “urn:oid:” bit is the SAML 2 identifier (OID) for the ePPN attribute type.

After configuring a basic Relying Party trust with SharePoint 2013, I need to configure Claims Rules that will release Shibboleth user attributes/claims to SharePoint.  You could use a simple “passthrough” rule for this.  However, I want incoming Shibboleth tokens that have a “@uvm.edu” UPN suffix to be treated as though they are Active Directory users.  To accomplish this, I need to do a claims transformation. In AD, the user UPN carries our internal AD domain suffix, so let’s transform the Shibboleth UPN using a Claim Rule on the SharePoint Relying Party Trust:

c:[Type == "urn:oid:", Value =~ "@uvm\.edu$"]
=> issue(Type = "", Issuer = "AD AUTHORITY", OriginalIssuer = c.OriginalIssuer, Value = regexreplace(c.Value, "^(?<user>[^@]+)@(.+)$", "${user}"), ValueType = c.ValueType);

NOTE: Code munged by WordPress. Contact me if you need exact syntax!

This appears to work… the first RegEx “@uvm\.edu$” should match an incoming UPN that ends with “@uvm.edu”. In the second set of regExps, we create a named capture group for the user portion of the UPN (which is everything from the start of the value up to, but not including, the “@” character), and place the captured data into a variable called “user”. We then replace everything trailing the user portion with the AD UPN suffix.

However, as noted above, the incoming Shibboleth SAML token does not contain AD group data in the “Role” attribute, so users authenticating from Shibboleth cannot get access to sites where they have been granted access using AD groups.  Not good! Fortunately, there is a solution.

An only-hinted-at, and certainly not well-documented, capability of the Claim Rule Language is the ability to create an issuable token with claims originating from more than one identity store. In our case, we need to supplement the incoming Shibboleth SAML token with “roles” claims obtained from Active Directory. I do this during the release of the ADFS token to the SharePoint relying party, using the following rule:

c:[Type == "", Value =~ "^$"]
=> issue(store = "Active Directory", types = (""), query = ";tokenGroups;CAMPUS\{0}", param = regexreplace(c.Value, "^(?<user>.+)$", "${user}"));

NOTE: code munged by WordPress. Contact me if you need exact syntax!

The issuance part of this rule bears some discussion. We use the “query” string to collect tokenGroups (or recursive group memberships). The query is run against the Active Directory ADFS attribute store (hence store = “Active Directory”). The query must take the format “LDAPFilter;AttributesToRetrieve;Domain\Username”.
In our case, there is no filter specified. According to TechNet, the default filter is “samAccountName={0}”, which really is what we want. We want to query a particular samAccountName for its tokenGroups. The DOMAIN\Username portion needs to be present, but the username portion is ignored. We could have used “jimmyJoeJimBob” instead of “{0}”.

The {0} represents the value collected in the “param” section of the issuance rule. I used a regular expression replacement rule to strip out the samAccountName from the UPN. Of course, I could simply have used the UPN in the query filter, which would have been cleaner. Maybe I will update this for a future project.

I had to solicit help from the Windows Higher Education list to figure this one out. There is limited documentation on the “query” portion of this rule available here. It is the only documentation I can find, and it does document the bizarre LDAP/AD query syntax discussed above:

…And some more general docs on the Claim Rule Language here:

…And an interesting use-case breakdown of the Language here:

…And a video from Microsoft’s “Identity Architects” here:

Part 6:  Where are you from?  Notes on Home Realm Discovery

When an ADFS server has multiple “Claims Provider Trusts” defined, the ADFS login page automatically will create a “WAYF”, or “Where are you from?” page to allow the user to select from multiple authentication providers.  In our case, the user would see “Active Directory” and “UVM Shibboleth”.  Since I would not want to confuse people with unnecessary choices, we can disable the display of one of these choices using PowerShell:

Set-AdfsRelyingPartyTrust -TargetName "SharePoint 2013" -ClaimsProviderName "UVM Shibboleth"

In this sample, “SharePoint 2013” is the name of the relying party defined in ADFS for which you want to set WAYF options.  “UVM Shibboleth” is the Claims Provider Trust that you want used.  This value can be provided as an array, but in this case we are going to provide only one value… the one authenticator that we want to use.  After configuring this change, ADFS logins coming from SharePoint get sent straight to Shibboleth for authentication.
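Reading the property back should confirm the change:

(Get-AdfsRelyingPartyTrust -Name "SharePoint 2013").ClaimsProviderName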

Part 7: The Exciting Results

Only a sysadmin could call this exciting…

Given how heavily MS invested in implementing WS-Federation and WS-Trust into their products (MS Office support for federation services was, to the best of my knowledge, focused entirely on the WS-* protocols implemented in ADFS 1.0), I was not expecting any client beyond a web browser to work with Shibboleth.  Imagine my surprise…


IE 11 and Chrome both log in using Shib with no problems.  Firefox works, but not without a glitch… upon being redirected back to SharePoint from our webauth page, we get a page full of un-interpreted HTML code.  Pressing “F5” to refresh clears the problem.

Office 2013 Clients:

All core Office 2013 applications appear to support opening of SharePoint documents from links in the browser.  Interestingly, it appears that Office is able to share ADFS tokens obtained by Internet Explorer, and vice versa.  The ADFS token outlives the browser session, too, so you actually have to log off of ADFS to prevent token re-use.  I tested the “Export to Excel” and “Add to Outlook” options in the SharePoint ribbon, and both worked without a fuss.

Getting Office apps to open content in SharePoint directly also works, although it’s a bit buggy.  Sometimes our webauth login dialog does not clear cleanly after authentication.

SkyDrive Pro (the desktop version included with Office 2013, soon to be “OneDrive for Business”) also works with Shibboleth login, amazingly.  The app-store version does not work with on-premises solutions at all, so I could not test it.

Mobile Clients:

I was able to access a OneNote notebook that I stored in SharePoint using OneNote for Android.  However, it was not easy.  OneNote for Android does not have a dialog that allows adding notebooks from arbitrary network locations.  I first had to add the notebook from a copy of OneNote 2013 for Windows that was linked to my Microsoft account.  The MS account then recorded the existence of the notebook.  When I logged in to OneNote on the Android device, it picked up on the SharePoint-backed notebook and I was able to connect.

The OneNote “metro” app does not appear to have the same capability as the Android app.  I cannot get it to connect to anything other than Office 365 or CIFS-backed files.

I was unable to test Office for iOS or Android because I do not own a device on which those apps are supported.

I still need to look at the “Office Document Connection” that comes with Office for the Mac, and at WebDAV clients, and perhaps some other third-party SharePoint apps to see if they work.

Miracast – the technology with 100 names

Some of us in ETS have been experimenting with wireless display technologies in the hopes of finding solutions that will work for those of us who don’t have an Apple client device and an Apple TV.   (To those of you with Macs, we are indeed jealous.  AirPlay truly dominates the wireless display market at this time).

Much noise has been made of late concerning a technology called “Miracast”.  This is an open standard for wireless display.  It is built into the new Windows 8.1 OS, and Android 4.2 and later.  When you can get a functional receiver (we tested the NetGear Push2TV with some success) it is a pretty slick technology.  However, issues getting the required drivers and firmware in place can be quite frustrating and lead to failed deployments.  If you are working on a wireless display deployment, we would love to share notes with you to see if we can reach some configuration recommendations for the rest of campus.

In the interest of information sharing, here are some factoids that we discovered:

Miracast = The “Wi-Fi Alliance” marketing name for the rather boringly named “Wi-Fi Display” standard.

Miracast = Sony “screen mirroring” = Panasonic “display mirroring” =  Google “Wireless Display” = Samsung “AllShare Cast” = LG “SmartShare” = Intel “WiDi 3.5”
(e.g. Implementers of the Wi-Fi Display protocol don’t have to call it Miracast.  The question is, why would they want to call it something else?)

Miracast ≠ Intel WiDi versions prior to 3.5 ≠ AirPlay ≠ DIAL (ChromeCast) ≠ DLNA

(e.g. Intel WiDi has been replaced with a Miracast implementation, but maintains the name WiDi when deployed with support for back-level WiDi implementations.  Also, don’t confuse Miracast with ChromeCast (a DIAL implementation), AirPlay (an Apple technology), or DLNA (a media streaming solution).)


App-V Server Configuration, Load Balanced Configuration

In my last post I discussed issues around Kerberos configuration for an App-V 5 server cluster in a load balanced configuration.  Today I will discuss subsequent configuration requirements for making App-V publishing function in a load-balanced environment.

After configuring standalone App-V servers with Management and Publishing Server roles, I had good success with adding packages to the environment and publishing them.  However, when switching to a load balanced configuration, I experienced a failure of the publishing server to pick up on changes in the management configuration.  Helpful resources and troubleshooting notes follow:

  • A TechNet social page that I referenced in my previous post makes reference to this same problem:
    But it does not point me towards any solutions.  This seems like some sort of permissions problem, so I set Sysinternals Procmon to watch w3wp.exe for “Access Denied” events, but I get nothing.  However, I do see a fair amount of database traffic at IIS startup time.
  • The following TechNet blog provided a key tipoff in App-V server diagnostics:
      The trick was to select “Show Analytic and Debug Logs” under “Actions” in the event viewer.  With this option enabled, I now see App-V management and publishing debug logs instead of just the default App-V event logs.  The debug logs contain the real error.  We see that the SID recorded for the publishing server in the management server database does not match the SID of the account making the connection!  What we needed to do was delete the publishing server entries from the management configuration, and create one new “server” under the name of the publishing server service account, not the computer account.  I just updated the SQL database entry manually, but I likely could have just used the Silverlight UI instead.  This change cleared up the mismatched SID error, but now we get an “access denied” error to the publishing metadata directory.
  • The following blog gives an excellent technical overview of App-V server infrastructure and the general troubleshooting process for resolving configuration issues:

    • Here it is suggested that I look at HKLM/Software/Microsoft/AppV/Server to review the management and publishing server configurations.  Sure enough, one problem seen here is that the publishing server is configured to connect to the management server on an http:// address.  However, I had updated the management servers to use https://.  I modified those registry values (see the sketch after this list) and restarted IIS.  Still no luck…
    • This blog explains how published applications are read out of a metadata xml file that is exposed to the publishing server by the management server.  Both are stored in c:\programdata\microsoft\appv.  When running Procmon.exe against w3wp.exe we see “Access Denied” to these directories by our service account.  After adding “modify” rights for the service account to these directories, metadata updates again start to happen.
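Putting those last two items together, the checks and fixes on each App-V server looked roughly like this (a sketch; the service account name is made up, and the paths are the defaults mentioned above):

# Review the App-V server registry configuration (look for http:// vs https:// URLs):
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\AppV\Server' -Recurse |
  ForEach-Object { $_.PSPath; Get-ItemProperty $_.PSPath | Format-List * }

# Grant the service account modify rights on the publishing metadata directories:
icacls "C:\ProgramData\Microsoft\AppV" /grant "CAMPUS\svc-appv:(OI)(CI)M" /T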

It is unsurprising that switching from the use of a local server account to an AD service account caused access problems for the App-V server.  The difficulty of discovering where account info and rights needed to be updated was a bit of a surprise.  But thanks to the blog-o-sphere and the mighty “procmon.exe”, we have our answers.

Now on to performance testing…