Archive for the ‘Uncategorized’ Category

Integrating SharePoint 2013 with ADFS and Shibboleth… The Motion Picture

Time again to attempt to implement that exciting technology, Federation Services (Web Single Sign On, SAML, WS-Federation, or whatever you want to call it) with SharePoint. Last time we tried this, SharePoint 2010 was a baby product, MS was just testing the waters with SAML 2.0 support in ADFS 2.0, and Shibboleth 2 was pretty new stuff here at UVM. The whole experience was unsatisfying. SharePoint STS configuration was full of arcane PowerShell commands, ADFS setup was complicated by poor farm setup documentation, and interop of Shibboleth 2 with ADFS 2 was not documented at all.  After wading through all of that mess, we ended up with user names being displayed as “i:05.t|adfsServiceName|userPrincipalName” (bleagh!), and with many applications that could not deal with Web SSO authenticators.  My general conclusion was that SharePoint really was not ready for federated login.

Four years later, things have changed a bit.  STS configuration still requires dense PowerShell commands, but at least it is better documented.  ADFS and Shibboleth interoperability is also well documented at this point.  Microsoft has most Office apps working with passive SAML authenticators, and has pledged to get the rest working this year.  While I would not judge the use of ADFS (with or without Shibboleth) to be the easy route to take to SharePoint 2013 deployment, it at least looks functional at this point.  So let’s kick the tires and see how it works…

Part 1: ADFS Setup

First, we need to set up ADFS.  We chose to deploy “ADFS 3” using Windows Server 2012 R2 as the OS platform.  ADFS 3 is required to support the new “workplace join” feature of Server 2012 R2.  Since we want to test this, we would need ADFS 3 anyway.  Unfortunately, ADFS on Server 2012 R2 is pretty virgin territory, and does not have the same troubleshooting resources available for it as do earlier releases.

Most of the configuration steps followed the TechNet documentation without variation.  We did find that we needed to modify the ACL on the service account used to run ADFS… we added the “Authenticated Users” principal to the ACL, and assigned the “Read all properties” right.

For us, the most complicating factor in ADFS 3 deployment is the replacement of IIS with the Windows kernel-mode HTTP server “http.sys”.  When we started experiencing connectivity problems with various clients to ADFS, we had no experience with HTTP.sys to assist in troubleshooting.  Most articles on HTTP.sys relate to Remote Desktop Services and Server 2003.  Our problems with HTTP.sys were rooted in an undocumented requirement for clients to submit SNI (Server Name Indication) information in their TLS “CLIENT HELLO” sequence.  I had to open a support case with Microsoft to resolve this problem, and only afterward was I able to find any Internet discussions that reflected MS advice:
http://social.msdn.microsoft.com/Forums/vstudio/en-US/d514b5a0-c01c-4ce4-b589-bca890e5921d/how-to-properly-setup-lb-probe-for-adfs-30-servers?forum=Geneva

It seems that if http.sys is bound using the hostname:port format, then TLS will require SNI.  If the binding is instead specified using ipAddr:port, SNI will not be required.  To fix our problem, we just needed to add a second HTTPS certificate binding using an IP address.  In this case, we just used “0.0.0.0:443”.  Here is the procedure:

  1. On each ADFS server and proxy, open an elevated command prompt
  2. run: netsh http show sslcert
  3. Record the certificate hash and application ID for the certificate used by ADFS
  4. run: netsh http add sslcert ipport=0.0.0.0:443 certhash=<certificate hash from step 3> appid={<application ID from step 3>}
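
For reference, here is what that last command looks like with hypothetical values plugged in (the hash and GUID below are placeholders; substitute the values recorded in step 3), followed by a quick check of the new binding:

# Placeholder certhash and appid - use the values from "netsh http show sslcert"
netsh http add sslcert ipport=0.0.0.0:443 certhash=a1b2c3d4e5f60718293a4b5c6d7e8f9012345678 appid={ad67a243-fa32-4a9c-bd58-8e0e5a0d3f9b}
netsh http show sslcert ipport=0.0.0.0:443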

Part 2: Configuring SharePoint to use ADFS

I started with Microsoft’s guide to configuring SAML authentication for SharePoint 2013 using ADFS:
http://technet.microsoft.com/en-us/library/hh305235(v=office.15).aspx
This is a well-written guide, and fairly easy to follow.  The only issue I take with the article is the recommendation to use the “emailAddress” claim as the “identifier claim” in SharePoint.  In many federated login scenarios, a foreign IdP may not want to release the email address attribute.  However, some variation of UPN likely will be released.  In the case of the InCommon federation, the “ePPN” value (eduPersonPrincipalName) generally is available to federation partners.  For this reason I chose to implement “UPN” as the “identifier claim” in the last command of phase 3 of the document:

$ap = New-SPTrustedIdentityTokenIssuer -Name <Name> -Description <Description> -realm $realm -ImportTrustCertificate $cert -ClaimsMappings $emailClaimMap,$upnClaimMap,$roleClaimMap,$sidClaimMap -SignInUrl $signInURL -IdentifierClaim $upnClaimMap.InputClaimType
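
For context, here is a minimal sketch of the surrounding variables from phase 3 of the TechNet article.  The name, realm, certificate path, and sign-in URL below are assumptions for illustration; the claim type URIs are the standard ones referenced elsewhere in this post:

# Placeholder realm, ADFS sign-in URL, and token-signing certificate path
$realm = "urn:sharepoint:webapps"
$signInURL = "https://adfs.example.edu/adfs/ls"
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\adfs-token-signing.cer")

# Claim mappings referenced in the command above
$emailClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
$upnClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" -IncomingClaimTypeDisplayName "UPN" -SameAsIncoming
$roleClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming
$sidClaimMap = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/primarysid" -IncomingClaimTypeDisplayName "SID" -SameAsIncoming

# UPN as the identifier claim, per the discussion above
$ap = New-SPTrustedIdentityTokenIssuer -Name "ADFS" -Description "ADFS federated login" -Realm $realm -ImportTrustCertificate $cert -ClaimsMappings $emailClaimMap,$upnClaimMap,$roleClaimMap,$sidClaimMap -SignInUrl $signInURL -IdentifierClaim $upnClaimMap.InputClaimType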

Part 3: Migrating SharePoint Users From Windows to Claims

Since we are planning an upgrade, not a new deployment, we need to migrate existing Windows account references in SharePoint to federated account references. To make this happen, we need to establish the federated account identity format. I simply log in to an open-access SharePoint site as an ADFS user, and record the account information. In this case, our accounts look like this:

i:0e.t|adfs.uvm.edu|user@campus.ad.uvm.edu

and groups:

c:0-.t|adfs.uvm.edu|samAccountName

We then can use PowerShell to find all account entries in SharePoint, and use the “Move-SPUser” PowerShell cmdlet to convert them.  I am still working on a final migration script for the production cutover, and I will try to post it here when it is ready.
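
In the meantime, here is a rough sketch of the approach.  The web application URL and the Windows-claims prefix are assumptions for illustration (the prefix assumes default claims encoding for our CAMPUS domain); test against a copy of the content database before running it for real:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Placeholder values for illustration
$webApp   = "https://sharepoint.example.edu"
$oldRealm = "i:0#.w|CAMPUS\"          # classic Windows claims prefix (assumption)
$newRealm = "i:0e.t|adfs.uvm.edu|"    # trusted identity provider prefix observed above

foreach ($site in Get-SPSite -WebApplication $webApp -Limit All) {
    foreach ($user in Get-SPUser -Web $site.RootWeb -Limit All) {
        if ($user.LoginName -like "$oldRealm*") {
            # Rebuild the login as a UPN-style claim, e.g. CAMPUS\jdoe -> jdoe@campus.ad.uvm.edu
            $sam      = $user.LoginName.Split('\')[-1]
            $newLogin = "$newRealm$sam@campus.ad.uvm.edu"
            Move-SPUser -Identity $user -NewAlias $newLogin -IgnoreSID -Confirm:$false
        }
    }
}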

Of some concern is keeping AD group permissions functional.  It turns out that SharePoint will respect AD group permissions for ADFS principals, but there are a few requirements:

  1. The incoming login token needs to contain the claim type “http://schemas.microsoft.com/ws/2008/06/identity/claims/role”,
    and that claim needs to contain the samAccountName (or CN) of the AD
    Group that was granted access to a site.  (In ADFS, this means that you need to release the AD LDAP attribute “Token-Groups – Unqualified Names” as the outgoing claim type “Role”.)
  2. When adding AD Groups as permissions in SharePoint, we need to use the “samAccountName” LDAP attribute as the identifier claim.  The LDAPCP (see Part 4) utility makes this easy as it will do this for us automatically when configured to search AD.

Requirement 1 could be a problem when using Shibboleth as the authentication provider.  Our Shibboleth deployment does not authenticate against AD, so a Shibboleth ticket will not contain AD LDAP “tokenGroup” data in the “role” claim.  I am working with the Shibboleth guys to see if there is any way to augment Shibboleth tokens with data pulled from AD.

Part 4:  The SharePoint PeoplePicker and ADFS

Experienced SharePoint users all know (and mostly love) the “people picker” that searches Active Directory to validate user and group names that are to be added to the access list for a SharePoint site.  One of the core problems with federation services is that they are authentication systems only.  ADFS and Shibboleth do not implement a directory service.  You cannot do a lookup on an ADFS principal that a user adds to a SharePoint site.  This is particularly irksome, since all of our ADFS users actually have matching accounts in Active Directory.

Fortunately, there is a solution… you can add a “Custom Claims Provider” into SharePoint which will augment incoming ADFS claims with matching user data pulled from Active Directory.  This provider also integrates with the PeoplePicker to allow querying of AD to validate Claims users that are being added to a SharePoint site.  A good write-up can be found here:
http://blogs.msdn.com/b/kaevans/archive/2013/05/26/fixing-people-picker-for-saml-claims-users-using-ldap.aspx

“But I don’t want to compile a SharePoint solution using Visual Studio,” I hear you (and me) whine.  No problem… there is a very good pre-built solution available from CodePlex:
http://ldapcp.codeplex.com/discussions/361947

Normally I do not like using third-party add-ons in SharePoint.  I will make an exception for LDAPCP because:

  1. It works.
  2. It saves me hours of Visual Studio work.
  3. It is a very popular project and thus likely to survive on CodePlex until it is no longer needed.
  4. If the project dies, we can implement our own Claims Provider using templates provided elsewhere with (hopefully) minimal fuss.

My only outstanding problem with LDAPCP is that it will not query principals in our Guest AD forest.  However, there are some suggestions from the developer along these lines:
http://ldapcp.codeplex.com/discussions/478092

To summarize, the developer recommends compiling our own version of LDAPCP from the provided source code.   We would use the method “SetLDAPConnections” found in the “LDAPCP_Custom.cs” source file to add an additional LDAP query source to the solution.  I will try this as time permits.

Part 5: Transforming Shibboleth tokens to ADFS

So far we have not strayed too far from well-trodden paths on the Internet.  Now we get to the fun part… configuring ADFS as a relying party to our Shibboleth Idp, then transforming the incoming Shibboleth SAML token into an ADFS token that can be consumed by SharePoint.

Microsoft published a rather useful guide on ADFS/Shibboleth/InCommon integration:
http://technet.microsoft.com/en-us/library/gg317734(WS.10).aspx
Using this guide we were able to set up ADFS as a relying party to our existing Shibboleth Idp with minimal fuss.  Since we already have an Idp, we skipped most of Step 1, and then jumped to Step 4 as we did not need to configure Shibboleth as an SP to ADFS.

I had the local “Shibboleth Guy” add our ADFS server to the relying parties configuration file on the Shib server, and release “uvm-common” attributes for this provider. This allows SharePoint/ADFS users to get their “eduPersonPrincipalName” (ePPN) released to ADFS/SharePoint from Shibboleth. However, SharePoint (and ADFS) do not natively understand this attribute, so we configure a “Claim Rule” on the “Claims Provider Trust” with Shibboleth.  The rule is an “Acceptance Transformation Rule” that we title “Transform ePPN to UPN”, and it has the following syntax:

c:[Type == "urn:oid:1.3.6.1.4.1.5923.1.1.1.6"] 
=> issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", 
Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, 
ValueType = c.ValueType);

The “urn:oid:1.3.6.1.4.1.5923.1.1.1.6” bit is the SAML 2 identifier for the ePPN type.

After configuring a basic Relying Party trust with SharePoint 2013, I need to configure Claims Rules that will release Shibboleth user attributes/claims to SharePoint.  You could use a simple “passthrough” rule for this.  However, I want incoming Shibboleth tokens that have a “@uvm.edu” UPN suffix to be treated as though they are Active Directory users.  To accomplish this, I need to do a claims transformation. In AD, the user UPN has the “@campus.ad.uvm.edu” suffix, so let’s transform the Shibboleth UPN using a Claim Rule on the SharePoint Relying Party Trust:

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", 
Value =~ "@uvm\.edu$"]
=> issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn", 
Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, 
Value = regexreplace(c.Value, "^(?<user>[^@]+)@(.+)$", "${user}@campus.ad.uvm.edu"), 
ValueType = c.ValueType);

This appears to work… the first RegEx “@uvm\.edu$” should match an incoming UPN that ends with “@uvm.edu”. In the regexreplace call, we create a capture group for the user portion of the UPN (everything from the start of the value up to, but not including, the “@” character), and then replace everything after the user portion with “@campus.ad.uvm.edu”.

However, as noted above, the incoming Shibboleth SAML token does not contain AD group data in the “Role” attribute, so users authenticating from Shibboleth cannot get access to sites where they have been granted access using AD groups.  Boo!

Part 6:  Where are you from?  Notes on Home Realm Discovery

When an ADFS server has multiple “Claims Provider Trusts” defined, the ADFS login page automatically will create a “WAYF”, or “Where are you from?” page to allow the user to select from multiple authentication providers.  In our case, the user would see “Active Directory” and “UVM Shibboleth”.  Since I would not want to confuse people with unnecessary choices, we can disable the display of one of these choices using PowerShell:

Set-AdfsRelyingPartyTrust -TargetName "SharePoint 2013" -ClaimsProviderName "UVM Shibboleth"

In this sample, “SharePoint 2013” is the name of the relying party defined in ADFS for which you want to set WAYF options.  “UVM Shibboleth” is the Claims Provider Trust that you want used.  This value can be provided as an array, but in this case we are going to provide only one value… the one authenticator that we want to use.  After configuring this change, ADFS logins coming from SharePoint get sent straight to Shibboleth for authentication.
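
If you want to confirm what a relying party is set to before and after the change, something like this should do it (same placeholder trust name as above):

Get-AdfsRelyingPartyTrust -Name "SharePoint 2013" | Select-Object Name, ClaimsProviderName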

Part 7: The Exciting Results

Only a sysadmin could call this exciting…

Given how heavily MS invested in implementing WS-Federation and WS-Trust into their products (MS Office support for federation services was, to the best of my knowledge, focused entirely on the WS-* protocols implemented in ADFS 1.0), I was not expecting any client beyond a web browser to work with Shibboleth.  Imagine my surprise…

Browsers:

IE 11 and Chrome both log in using Shib with no problems.  Firefox works, but not without a glitch… upon being redirected back to SharePoint from our webauth page, we get a page full of uninterpreted HTML code.  Pressing “F5” to refresh clears the problem.

Office 2013 Clients:

All core Office 2013 applications appear to support opening of SharePoint documents from links in the browser.  Interestingly, it appears that Office is able to share ADFS tokens obtained by Internet Explorer, and vice versa.  The ADFS token outlives the browser session, too, so you actually have to log off of ADFS to prevent token re-use.  I tested the “Export to Excel” and “Add to Outlook” options in the SharePoint ribbon, and both worked without a fuss.

Getting Office apps to open content in SharePoint directly also works, although it’s a bit buggy.  Sometimes our webauth login dialog does not clear cleanly after authentication.

SkyDrive Pro (the desktop version included with Office 2013, soon to be “OneDrive for Business”) also works with Shibboleth login, amazingly.  The app-store version does not work with on-premises solutions at all, so I could not test it.

Mobile Clients:

I was able to access a OneNote notebook that I stored in SharePoint using OneNote for Android.  However, it was not easy.  OneNote for Android does not have a dialog that allows for adding notebooks from arbitrary network locations.  I first had to add the notebook from a copy of OneNote 2013 for Windows that was linked to my Microsoft account.  The MS account then recorded the existence of the notebook.  When I logged in to OneNote on the Android, it picked up on the SharePoint-backed notebook and I was able to connect.

The OneNote “metro” app does not appear to have the same capability as the Android app.  I cannot get it to connect to anything other than Office 365 or CIFS-backed files.

I was unable to test Office for iOS or Android because I do not own a device on which those apps are supported.

I still need to look at the “Office Document Connection” that comes with Office for the Mac, and at WebDAV clients, and perhaps some other third-party SharePoint apps to see if they work.

Miracast – the technology with 100 names

Some of us in ETS have been experimenting with wireless display technologies in the hopes of finding solutions that will work for those of us who don’t have Apple client devices and an Apple TV.   (To those of you with Macs, we are indeed jealous.  AirPlay truly dominates the wireless display market at this time.)

Much noise has been made of late concerning a technology called “Miracast”.  This is an open standard for wireless display.  It is built into the new Windows 8.1 OS, and Android 4.2 and later.  When you can get a functional receiver (we tested the NetGear Push2TV with some success) it is a pretty slick technology.  However, issues getting the required drivers and firmware in place can be quite frustrating and lead to failed deployments.  If you are working on a wireless display deployment, we would love to share notes with you to see if we can reach some configuration recommendations for the rest of campus.

In the interest of information sharing, here are some factoids that we discovered:

Miracast = The “Wi-Fi Alliance” marketing name for the rather boringly named “Wi-Fi Display” standard.

Miracast = Sony “screen mirroring” = Panasonic “display mirroring” =  Google “Wireless Display” = Samsung “AllShare Cast” = LG “SmartShare” = Intel “WiDi 3.5”
(i.e. implementers of the Wi-Fi Display protocol don’t have to call it Miracast.  The question is, why would they want to call it something else?)

Miracast ≠ Intel WiDi versions prior to 3.5 ≠ AirPlay ≠ DIAL (ChromeCast) ≠ DLNA

(i.e. Intel WiDi has been replaced with a Miracast implementation, but maintains the name WiDi when deployed with support for back-level WiDi implementations.  Also, don’t confuse Miracast with ChromeCast (a DIAL implementation), AirPlay (an Apple technology), or DLNA (a media streaming solution).)

References:
http://en.wikipedia.org/wiki/Miracast
http://www.theverge.com/2013/9/25/4753264/is-the-airplay-killer-already-dead

App-V Server Configuration, Load Balanced Configuration

In my last post I discussed issues around Kerberos configuration for an App-V 5 server cluster in a load balanced configuration.  Today I will discuss subsequent configuration requirements for making App-V publishing function in a load-balanced environment.

After configuring standalone App-V servers with Management and Publishing Server roles, I had good success with adding packages to the environment and publishing them.  However, when switching to a load balanced configuration, I experienced a failure of the publishing server to pick up on changes in the management configuration.  Helpful resources and troubleshooting notes follow:

  • A TechNet social page that I referenced in my previous post makes reference to this same problem:
    http://social.technet.microsoft.com/Forums/en-US/2b39e2b8-aba1-4e96-b18f-c5bcb9f12687/load-balancing-two-appv-50-servers-the-publishing-service-is-not-able-to-contact-the-management
    But does not point me towards any solutions.  This seems like some sort of permissions problem, so I set Sysinternals Procmon to watch w3wp.exe for “Access Denied” events, but I get nothing.  However, I do see a fair amount of database traffic at IIS startup time.
  • The following TechNet blog provided a key tipoff in App-V server diagnostics:
    http://blogs.technet.com/b/appv/archive/2013/01/23/how-to-collect-app-v-5-0-debug-traces.aspx
      The trick was to select “Show Analytic and Debug Logs” under “Actions” in the event viewer.  With this option enabled, I now see App-V management and publishing debug logs instead of just the default App-V event logs.  The debug logs contain the real error.  We see that the SID recorded for the publishing server in the management server database does not match the SID of the account making the connection!  What we needed to do was delete the publishing server entries from the management configuration, and create one new “server” under the name of the publishing server service account, not the computer account.  I just updated the SQL database entry manually, but I likely could have just used the Silverlight UI instead.  This change cleared up the mismatched SID error, but now we get an “access denied” error to the publishing metadata directory.
  • The following blog gives an excellent technical overview of App-V server infrastructure and the general troubleshooting process for resolving configuration issues:
    http://kirxblog.wordpress.com/2012/05/11/app-v-5-beta-basic-troubleshooting/

    • Here it is suggested that I look at HKLM/Software/Microsoft/AppV/Server to review the management and publishing server configurations.  Sure enough, one problem seen here is that the publishing server is configured to connect to the management server on an http:// address.  However, I updated the management servers to use https://.  I modified those registry values and restarted IIS.  Still no luck… 
    • This blog explains how published applications are read out of a metadata xml file that is exposed to the publishing server by the management server.  Both are stored in c:\programdata\microsoft\appv.  When running Procmon.exe against w3wp.exe we see “Access Denied” to these directories by our service account.  After adding “modify” rights for the service account to these directories, metadata updates again start to happen.
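
For the record, the ACL change from that last bullet can be scripted.  A minimal sketch, assuming a service account named CAMPUS\svc-appv and the metadata location mentioned above (adjust both for your environment):

# Grant the App-V IIS service account Modify rights on the metadata directories
$account = "CAMPUS\svc-appv"   # placeholder service account name
icacls "C:\ProgramData\Microsoft\AppV" /grant "${account}:(OI)(CI)M" /T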

It is unsurprising that switching from the use of a local server account to an AD service account caused access problems for the App-V server.  The difficulty of discovering where account info and rights needed to be updated was a bit of a surprise.  But thanks to the blog-o-sphere and the mighty “procmon.exe”, we have our answers.

Now on to performance testing…

App-V 5 Server, F5 Load Balancers, and Kerberos

More fun today with Kerberos and load balancers.  Today’s challenge related to getting the Microsoft App-V publishing server to work with an F5 load balancer in a Layer 4/n-Path/DSR configuration.  Everything was working when I was accessing the individual server nodes, but when I switched to using the load balanced name and address, authentication started to fail.

After lots of log searching I eventually tried a wire trace, and found the following Kerberos error in the response from the App-V server to the App-V client:
KRB5KRB_AP_ERR_MODIFIED

Lots of different resources helped here:

  • This TechNet page explains various Kerberos errors and why they might occur:
    http://blogs.technet.com/b/askds/archive/2008/06/11/kerberos-authentication-problems-service-principal-name-spn-issues-part-3.aspx

    Of note is the scenario where the account handling the authentication request does not hold the SPN for which the request was made.  I set the SPN for my IIS application pool identity, but further analysis of the error packet shows that it was handled by my App-V server machine account, not the service account.  Augh!  Why?

  • This thread on TechNet Social was the biggest help:

    http://social.technet.microsoft.com/Forums/en-US/2b39e2b8-aba1-4e96-b18f-c5bcb9f12687/load-balancing-two-appv-50-servers-the-publishing-service-is-not-able-to-contact-the-management

    The user posted all of the steps they followed in configuring IIS and the service account SPN, including the tidbit:
    changed the authentication of the “Management Service” web site to useAppPoolCredentials=”true”
    I have never used this particular setting, so I dug into it…

  • The following MSDN article explains the IIS 7.0 feature of “kernel authentication”, how it affects the need for SPN entries, and its interplay with application pool identity accounts:
    http://blogs.msdn.com/b/webtopics/archive/2009/01/19/service-principal-name-spn-checklist-for-kerberos-authentication-with-iis-7-0.aspx

    Basically, with kernel-mode authentication, the SYSTEM account will handle all Kerberos authentication by default.  This explains why we were seeing Kerberos errors in the communications with the App-V client… the IIS pool identity account was not handling Kerberos delegation!

    Of special interest is this statement:
    Either:
    Disable Kernel mode authentication and follow the general steps for Kerberos as in the previous IIS 6.0 version.
    Or,
    [Recommended for Performance reasons]
    Let Kernel mode authentication be enabled and the Application pool’s identity be used for Kerberos ticket decryption. The only thing you need to do here is:
    1. Run the Application pool under a common custom domain account.
    2. Add this attribute “useAppPoolCredentials” in the ApplicationHost.config file.

  • This TechNet page documents how to configure Kerberos auth in IIS, and mentions the use of the IIS appcmd.exe to set the “useAppPoolCredentials” option:
    http://technet.microsoft.com/en-us/library/dd759186.aspx
    Included is the exact command line required to set the value to true:
     appcmd.exe set config -section:system.webServer/security/authentication/windowsAuthentication -useAppPoolCredentials:true
    (But the page does not really tell you what it is for, which is where the MSDN article comes in handy.)
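
Putting the pieces together, the fix in our case amounted to something like the following (the host and account names are placeholders for illustration; the appcmd line is the one from the TechNet article above):

# Register the load-balanced and node URLs on the app pool service account, not the machine accounts
setspn -S HTTP/appv.example.edu CAMPUS\svc-appv
setspn -S HTTP/appv-node1.example.edu CAMPUS\svc-appv

# Let IIS kernel-mode authentication decrypt tickets with the app pool identity instead of SYSTEM
C:\Windows\System32\inetsrv\appcmd.exe set config -section:system.webServer/security/authentication/windowsAuthentication -useAppPoolCredentials:true

iisreset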

So, Kerberos under IIS 7 and later has some nuances not present in IIS 6.  I wonder how I did not encounter this before?

The case of the undeletable directory

I had a “fun” time today with some directories that I could not delete.  These were mandatory roaming profiles that I previously had attempted to upload to a file server using “robocopy”.  Owing to switches that I used when performing the upload, directory junctions were treated as files, and robocopy got caught in a recursive loop because of a circular reference to “Application Data”.  By the time I caught the problem, I had created a set of nested “Application Data” directories that must have been over 400 characters long.

All of the tricks I had used in the past to delete “deep” directories were failing me.  “rd /s /q [dirName]” failed with “Access Denied”, even though I was the directory owner.  Running the command as System encountered the same problem.  I mapped a drive to the directory, as deep down in the nested folders as I could get.  From there, I was able to “CD” to the deepest “Application Data” directory, but I still could not delete the directory.  (I got a “directory not empty” error.)

Eventually, Google unveiled a suggestion to use “robocopy” to mirror an empty directory to the problematic directory.  (Unfortunately, I have lost track of the link, so I cannot give credit where it is due.)  “Good idea,” I thought.  Why not use the utility that created the problem to solve the problem?  After all, if robocopy can create paths that are 400 characters long, perhaps it can delete them, too.

To test, I simply created an empty directory “E:\foo”, and ran the command:
robocopy.exe /mir /e E:\foo E:\mandatory\corruptProfile

Robocopy quickly chewed through the problematic profile, and a few seconds later I had an empty directory that I was able to delete.  Hurray!
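
If you want the whole trick as a repeatable snippet, here is a minimal sketch using the same hypothetical paths as above:

# Mirror an empty folder over the stuck directory, then remove both
New-Item -ItemType Directory -Path 'E:\foo' -Force | Out-Null
robocopy 'E:\foo' 'E:\mandatory\corruptProfile' /MIR
Remove-Item 'E:\mandatory\corruptProfile', 'E:\foo' -Recurse -Force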

VDI Profile Loading Delays

We are noticing that it takes rather a long time for users to log in to our VDI environment (~2 minutes, in some circumstances).  I did some analysis of login times using Sysinternals Procmon.  (Enable boot logging, use the “view process tree” feature to look at process times at logon.  See http://blogs.technet.com/b/markrussinovich/archive/2012/07/02/3506849.aspx for details).  What I found was that a child process of explorer.exe called “ie4uinit.exe” was running for most of this time.  This process appears to be part of Microsoft “Active Setup” (discussed in some detail here: http://blog.ressoftware.com/index.php/2011/12/29/disable-active-setup-revealed/).

So what if we disable Active Setup?  Noise on the Internet suggests that this is possible, simply by deleting the key:
HKEY_LOCAL_MACHINE\Software\Microsoft\Active Setup
as suggested here:
http://communities.vmware.com/thread/292229?start=0&tstart=0

However, there is some indication that this could have unintended consequences.  In my case, it immediately caused a logon script to fail to run.  Bummer!

What other solutions are possible?  Members of the Windows in Higher Education mailing list recently recommended using mandatory profiles.  There is a reasonably good rundown of the mandatory profile creation process here:
http://markswinkels.nl/2009/12/how-to-create-a-mandatory-profile-in-windows-server-2008-r2/
Missing details are:

  1. It is possible for the mandatory roaming profile to be stored locally (i.e. “C:\Users\VDI_Mandatory.V2”) to avoid over-the-network profile copy delays.  However, in our View environment, using a network location appears to be faster!
  2. The mandatory roaming profile can be specified using the Group Policy settings in Computer Configuration -> Policies -> Administrative Templates -> System -> User Profiles.  (See “Set roaming profile path for all users logging onto this computer” and “Delete cached copies of roaming profiles”.)
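
For reference, the registry-based equivalents of those two policies look roughly like this (the value names are to the best of my recollection and the profile path is a placeholder, so treat this as a sketch and prefer the Group Policy editor):

# Computer-side User Profiles policy values (names assumed; path is a placeholder)
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\System"
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name MachineProfilePath -Value "\\filer\profiles\VDI_Mandatory" -Type String
Set-ItemProperty -Path $key -Name DeleteRoamingCache -Value 1 -Type DWord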

In testing, I found initial logon times were reduced from two minutes to approximately 20 seconds.  Good!  (But still not great.)  Additional benefits are that it is no longer necessary to run the logon script that I developed to customize the Start Menu and Task Bar.  I also can remove the Group Policy preferences that clean up local profiles on the computer.

MBAM Configuration Nuances

This week we are continuing testing of the new Microsoft Bitlocker Administration and Management 2.0 tool (MBAM).

MBAM is not overly complicated, but it does have several service tiers and dependencies which make initial setup a bit irksome. After plowing through configuration of a SQL database, SQL Reporting Services, and IIS, we still needed to configure MBAM Group Policy settings, and then do a fair number of tweaks to make the service actually work. Here are the most significant deviations from the official documentation:

  1. The Group Policy templates for MBAM are not uploaded to the AD Policy Store during product installation, nor does the documentation recommend that you complete this step. However, if you want to be able to edit MBAM policy from any workstation in the domain, you really do need to upload the ADMX templates. Making this happen is easy… just use the MBAM installer to install the MBAM policy templates locally, then open c:\windows\PolicyDefinitions, and copy BitLockerManagement.admx and BitLockerUserManagement.admx to \\[domain]\SYSVOL\[domain]\Policies\PolicyDefinitions (you will need domain admin rights to do this). Also copy the corresponding .adml files in the local language directory of your local PolicyDefinitions directory to the local language directory on the domain controller (in my case, these are in the “en-US” subdirectory). A sketch of this copy appears after this list.
  2. After installing the MBAM Client and policy settings, clients were failing to auto-initiate encryption, and were failing to report status to the management server.  The MBAM Admin Event Logs were showing the following error:
    Log Name: Microsoft-Windows-MBAM/Admin
    Source: Microsoft-Windows-MBAM
    Event ID: 4
    Task Category: None
    Level: Error
    User: SYSTEM
    Computer: machinename.domainname.com
    Description: An error occurred while sending encryption status data.
    Error code: 0x803d0013

    This is occurring for a few reasons.  One, the MBAM server is not trusted for delegation, so it cannot perform Kerberos authentication in IIS.  Two, the public URL for MBAM services (https://bitlocker.uvm.edu) does not match the internal name of the server (BAM1).  To fix this, we needed to perform a few additional configuration steps:

    1. Create the following key and value on the MBAM management server:
      HKEY_LOCAL_MACHINE\Software\Microsoft\MBAM
      DWORD(32-bit) - DisableMachineVerification
      Value = 1
    2. On the MBAM Administration Server AD object, enable the “Trust for delegation for any service (Kerberos Only) option”, under the Delegation tab.
    3. Use the “setspn” utility to add additional principal names for the public URL of the server to the AD server account:
      setspn -A HOST/bitlocker.mydomain.com MYDOMAIN\MyServer$
      setspn -A HTTP/bitlocker.mydomain.com MYDOMAIN\MyServer$
      setspn -A RestrictedKrbHost/bitlocker.mydomain.com MYDOMAIN\MyServer$
      (Note that if using a service account to run the MBAM Administration Service, you should use “setspn” to set the HOST/HTTP names for the service account instead of the domain computer account).
    4. It appears that it may also be necessary to add the “BackConnectionHostNames” REG_MULTI_SZ value under “HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0”, containing any public DNS names used by the MBAM Administration Server (this likely is necessary only in a load balanced configuration).

    We then needed to perform an IISRESET on the management server, and cycle the MBAM Clients.

  3. The MBAM Help Desk web application was failing to display reports.  This was happening because the installer grabbed the unencrypted reporting services URL from the reporting services instance.  I had to open:
    C:\inetpub\Microsoft BitLocker Management Solution\Help Desk Website\web.config
    Then locate the setting that holds the Reporting Services URL, and edit its value to the SSL version of the Reporting Services web site.
  4. The MBAM documentation claims that you will use MBAM policies in place of standard Windows BitLocker policies.  This is somewhat misleading… Many MBAM policy settings also will change the “classic” BitLocker policy settings, so it will appear that you have configured both classic and MBAM policies in the editor.  This would not really be a problem were it not for the fact that MBAM policies are not comprehensive.  You may need to return to the “classic” settings to configure appropriate behavior in your environment.  For example, we experienced difficulty in encrypting a Dell Latitude 10 tablet using MBAM.  On this machine, we saw the following error in the MBAM Admin Event Log:
    Event ID: 2
    An error occurred while applying MBAM policies.
    Volume ID:\\?\Volume{VolumeGUID}\
    Error Code: 0x803100B6
    Details:  No pre-boot keyboard or Windows recovery Environment detected.  The user may not be able to provide the required input to unlock the volume.

    This error is happening because our policy is set to “Allow PIN” (BitLocker PIN Authenticator is allowed, but not required).  Apparently, MBAM default-fails the attempted encryption, even though this is not a “fatal” error.  To allow encryption to continue, I needed to set the classic policy “Enable use of BitLocker authentication requiring preboot keyboard input on slates” as defined here:
    http://technet.microsoft.com/en-us/library/jj679890.aspx#BKMK_slates
    With this policy in place, encryption completes successfully on the tablet computer.
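
For reference, here is a rough sketch of the template copy from step 1 and the registry changes from step 2 above (the domain name and public DNS name are placeholders for illustration; the setspn commands are already listed above, so they are not repeated).  Run it elevated on the MBAM management server:

# Step 1: copy the locally installed MBAM ADMX/ADML templates to the AD central store
# (placeholder domain name; requires domain admin rights)
$store = "\\mydomain.com\SYSVOL\mydomain.com\Policies\PolicyDefinitions"
Copy-Item "C:\Windows\PolicyDefinitions\BitLockerManagement.admx","C:\Windows\PolicyDefinitions\BitLockerUserManagement.admx" $store
Copy-Item "C:\Windows\PolicyDefinitions\en-US\BitLockerManagement.adml","C:\Windows\PolicyDefinitions\en-US\BitLockerUserManagement.adml" "$store\en-US"

# Step 2.1: disable machine verification on the MBAM management server
New-Item -Path "HKLM:\SOFTWARE\Microsoft\MBAM" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\MBAM" -Name DisableMachineVerification -Value 1 -Type DWord

# Step 2.4: publish the public DNS name(s) in BackConnectionHostNames (load balanced configurations)
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" -Name BackConnectionHostNames -Value @("bitlocker.mydomain.com") -Type MultiString

iisreset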

Other than these caveats, the tool does appear to be working.  Setting up our PGP Universal Server was easier, but suffering through the pain of ongoing PGP disk encryption support was agonizing.  Hopefully a little time spent on configuring a solid BitLocker support environment will bear lasting fruit for our constituents down the road.

Additional Resources:

Rick Delserone’s MBAM: Real World Information – A rundown on MBAM Certificate Configuration, Group Policy Templates, and undocumented registry settings:
http://www.css-security.com/blog/mbam-real-world-information/

SQL Server 2012, Transparent Data Encryption, and Availability Groups

We are looking into using Microsoft Bitlocker Administration and Monitoring (MBAM) 2.0 to manage BitLocker in our environment. One requirement for MBAM is a SQL Server database instance that supports Transparent Data Encryption (TDE).  (Update 2013-06-04:  Microsoft now claims that TDE is “optional” with MBAM 2.0, which is nice to know.  If only they had told me this before I went to the trouble of setting up SQL 2012 Enterprise just for this project!)  Currently we also are in the process of investigating the creation of a consolidated SQL 2012 Enterprise Edition “AlwaysOn” Availability Group. I wanted to see if I could create the MBAM Recovery Database in a SQL 2012 Availability Group. This proved slightly tricky… fortunately I was able to find a decent reference here:

https://www.simple-talk.com/sql/database-administration/encrypting-your-sql-server-2012-alwayson-availability-databases/

The trick is, you need to create SQL certificates on each member server of the Availability Group that have the same name and are generated from the same private key. The procedure follows…

On the first server in the group, create a SQL Master Key and Certificate by running the following code. The script will create a backup file in your SQL Server data directory. Move this file to an archival location. If you lose the file and password, you will not be able to recover encrypted databases in a disaster event:

USE MASTER
GO

-- Create a Master Key
CREATE MASTER KEY ENCRYPTION BY Password = 'Password1';

-- Backup the Master Key
BACKUP MASTER KEY
   TO FILE = 'Server_MasterKey'
   ENCRYPTION BY Password = 'Password2';

-- Create Certificate Protected by Master Key
CREATE Certificate SQLCertTDEMaster
   WITH Subject = 'Certificate to protect TDE key';

-- Backup the Certificate
BACKUP Certificate SQLCertTDEMaster
   TO FILE = 'SQLCertTDEMaster_cer'
   WITH Private KEY (
       FILE = 'SQLCertTDEMaster_key',
       ENCRYPTION BY Password = 'Password3'
   );

Now create a master key on any secondary servers in the availability group, and create the same cert by using the backup file from the first step, above. You will need to copy the certificate backup files to the local server data directory, or use a network share that is accessible to the account running the script:

-- Create a Master Key
CREATE MASTER KEY ENCRYPTION BY Password = 'Password1';
-- Backup the Master Key
BACKUP MASTER KEY
   TO FILE = 'Server_MasterKey'
   ENCRYPTION BY Password = 'Password2';

-- Create Certificate Protected by Master Key
CREATE Certificate SQLCertTDEMaster
   FROM FILE = 'SQLCertTDEMaster_cer'
   WITH Private KEY (
       FILE = 'SQLCertTDEMaster_key',
       Decryption BY Password = 'Password3'
   );

To avoid needless trouble, create your new database and add it to your availability group before encrypting the database. Once the database is created, you can initiate encryption by opening SQL Management Studio, right-clicking your database, selecting Tasks, then selecting “Manage Database Encryption”. Select the option to generate the database encryption key using a server certificate. Select the certificate created above, and select the option to “set database encryption on”.
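
If you prefer to script that last step rather than click through SSMS, the equivalent T-SQL can be run with Invoke-Sqlcmd.  A minimal sketch, assuming a placeholder database name of MBAMRecovery and the certificate created above (run it against the primary replica):

# Placeholder instance and database names - adjust for your environment
Invoke-Sqlcmd -ServerInstance "SQLNODE1" -Query @"
USE [MBAMRecovery];
CREATE DATABASE ENCRYPTION KEY
   WITH ALGORITHM = AES_256
   ENCRYPTION BY SERVER CERTIFICATE SQLCertTDEMaster;
ALTER DATABASE [MBAMRecovery] SET ENCRYPTION ON;
"@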

Once the database is encrypted, be sure to test availability group failover to make sure the secondary servers are able to work with the encrypted database.

Dell XPS 12 – The Windows 8 Flagship?

Regular readers of my blog (all two of you) may recall the “series” I started this fall on Windows 8 launch devices (concerning the HP Envy X2 and the Samsung SmartPC Pro 700t). These devices both had strengths, but failed in other ways that made them difficult or impossible to support in an enterprise environment. This month, I got my hands on a device that breaks through that barrier and satisfies in a big way. The new Dell XPS 12 finally arrived on our campus about two weeks ago. We immediately were taken with its light weight (3 lbs.), sleek styling, and novel materials (full carbon fiber base, carbon fiber and aluminum lid, and that unique flip-over touch screen). The 8-second boot time is another impressive feature. A longer battery life would have been appreciated, but I can live with it. Other helpful enhancements would be the inclusion of an active stylus. I also would appreciate slightly more resistance in the keyboard.

Others have weighed in on the appearance, performance, and usability of this fancy Ultrabook, though, so I will forgo further commentary on those aspects of the XPS 12. What most concerned us was the ability to support OS redeployment, BitLocker encryption, and hardware servicing on our Campus.

We unboxed and re-deployed the computer with Windows 8 Enterprise within one day. There were a few deployment hiccoughs, but in general re-deployment was what we have come to expect from Dell. All required drivers for the XPS 12 were made available in a single downloadable CAB file. We extracted this CAB to our MDT/LiteTouch Deployment Share, rebuilt our boot media, and initiated a LiteTouch deployment. There was a brief problem getting LiteTouch to start… we needed to disable the “Secure Boot” option in EFI/BIOS, and we needed to set the EFI boot mode to “Legacy” to allow our boot media to operate. Once those changes were made, the XPS 12 booted to our USB WinPE media without complaint. Upon completion of deployment, all devices in the device manager reported as functioning. There were no “poorly-behaved” drivers that required un-scripted installation. We did find that the track-pad was behaving strangely. Investigation revealed that the PnP process had grabbed a Windows 7 track-pad driver from our deployment share. We corrected this manually, then separated our Windows 8 drivers from our Windows 7 drivers in the Deployment Workbench… this should prevent the problem from recurring in future deployments.

BitLocker was easy to implement. The TPM chip readily was recognized by the OS, and TPM-with-PIN encryption was accomplished in minutes. I spent half a day trying to encrypt an older Dell Latitude E6500 a few months back. This was a breeze by comparison.

On the servicing front, we have good news. Dell now is allowing on-site servicing for all XPS models, with full reimbursement for parts and labor for qualified technicians. Physical serviceability is a big concern for newer Ultrabooks. A troubling trend in tablet and notebook design is the use of solder on drive mounts and glue to hold batteries in place (the latest “Retina” MacBooks and the MS Surface tablets suffer from these problems). Fortunately, it appears that all major components of the XPS 12 can be removed and replaced without the need to re-solder or remove glue. The most frequently swapped components such as the battery, mSATA drive, and memory chips look pretty easy to access. The keyboard is a bit of a pain to get to, but at least it can be serviced.

If only more Windows 8 launch products had been this good… I hope we see more products of this quality coming from Dell (and other vendors) in the near future.

Update:  2013-11-1

Five months into using the XPS 12, I started to have trouble with the trackpad.  It would not click anymore!  Since we are working with an evaluation unit, I do not have warranty coverage, so I figured I had no warranty to void by attempting to repair it on my own.

Some digging in the Dell support site revealed that the so-called XPS 12 “User Manual” is actually a service manual!  The readily available PDF document illustrates step-by-step how to remove the carbon fiber base plate and the battery in order to get to the track pad.  (The only challenging part was locating a #5 Torx screwdriver to take off the base plate.)  Within 15 minutes I had removed the click pad, and cleaned the trapped grit out from under it.  (Within a half hour I had the unit re-assembled.  In another 15 minutes I had taken the base plate back off, reconnected the battery power connector, and re-attached the base plate, again.)  The unit powered back on as normal, with the track pad working like new.

At a time when consumer devices are moving towards non-serviceable designs (think MacBook Retina), it is nice to see a device that is thin and light while still maintaining serviceability.  Perhaps the track pad on the MacBook Retina is less prone to trapping grit, but imagine if it did?  With all the components glued together, you might be out $2000 because of a bit of sand.  I really have to hand it to Dell.  These XPS Ultrabooks are really nicely engineered.

 

VMware View – Implementing Idle User Auto-Logout

We are going live with our first public VMware View terminals this week (Wyse P25 “zero-clients”… nice units).  I had what I expected would be an easy list of “little jobs” to be completed before going live. Famous last words…

One item on the list was implementing an “idle user logout” process.  This process would detect when a View session had gone idle, and would disconnect the session automatically (preferably after prompting the user).  This disconnected session then would be logged out by View Manager after a fixed amount of time.

This proved rather more difficult than I had predicted.  I tried several solutions before arriving at one that worked.  Among the failed solutions:

  • Using Group Policy to configure Remote Desktop Session Manager idle session limits.  The View configuration documents imply that this should work, but it does not.  I expect that the policies would be effective if you were connecting to your View desktops using RDP, but PCoIP sessions just will not disconnect automatically (at least, they would not for me).
  • Using the Windows Task Scheduler to configure a disconnect script that will trigger on idle.  This did not work for two reasons.  First, the Task Scheduler only evaluates for idle conditions every 15 minutes.  Second, for the Task Scheduler, “idle” means not only that the user is not directing mouse and keyboard to the computer, but that the CPU also is not doing anything.  As a result, we could not get consistent auto-logout times.

The solution that we settled on involved the use of a custom screensaver developed by the “Grim Admin”, called “ScreenSaver Operations”:

http://www.grimadmin.com/staticpages/index.php/ss-operations

This is a great little utility that accomplishes what the “WinExit” screensaver used to do for Win XP.  (WinExit cannot easily be used on Win7, and is a bit hostile to 64-bit Windows.)  Screensaver Operations has a well-written README describing the use of registry entries to control the screensaver globally (i.e. for all users on the computer).  I set these registry operations as Group Policy Preferences, and we are in business.

Two slight complications… since the screensaver is 32-bit, you need to use the “sysnative” filesystem redirector if you want the screensaver to trigger 64-bit executables.  In our case, I wanted the screensaver to launch “tsdiscon.exe” (to disconnect the View session), so I had to use the path:
%windir%\sysnative\tsdiscon.exe
Additionally, you will need to specify the full path to the screensaver in the Group Policy dialogs (i.e. %SystemRoot%\SysWOW64\Screensaver Operations.scr).  If you fail to do so, the screensaver will appear to be configured in the Control Panel, and you will be able to preview it by clicking the “preview” button, but the screensaver WILL NEVER START.
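
For what it is worth, the screensaver assignment that we push through Group Policy Preferences boils down to a few per-user registry values like the ones below (the ten-minute timeout is an example; the Screensaver Operations-specific command entries are documented in its README, so I will not guess at them here):

# Standard per-user screensaver values (the GPP items target HKCU\Control Panel\Desktop)
$key = "HKCU:\Control Panel\Desktop"
Set-ItemProperty -Path $key -Name "SCRNSAVE.EXE" -Value "$env:SystemRoot\SysWOW64\Screensaver Operations.scr"
Set-ItemProperty -Path $key -Name ScreenSaveActive -Value "1"
Set-ItemProperty -Path $key -Name ScreenSaveTimeOut -Value "600"   # seconds of idle before the screensaver (and thus tsdiscon.exe) fires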

Ashamedly, I will admit that this little challenge took much longer to accomplish than it should have.  No wonder lab managers burn out so easily.