# Remote Desktop Gateway Service – register NPS

I struggled with getting a new Server 2016 Remote Desktop Gateway Service running. I followed the official documentation from Microsoft, configuring two servers as a farm and creating a single CAP and RAP identically on each server. But every time I tried to connect, the client reported an error saying that my account was not authorized.

I love those error messages that say “Contact your network administrator for assistance.”

I found a corresponding entry in the Microsoft-Windows-TerminalServices-Gateway/Operational log with the following text:

The user “CAMPUS\[username]”, on client computer “132.198.xxx.yyy”, did not meet connection authorization policy requirements and was therefore not authorized to access the RD Gateway server. The authentication method used was: “NTLM” and connection protocol used: “HTTP”. The following error occurred: “23003”.

I double-checked the groups I had added to the CAP and verified the account I was using should be authorized. I even removed everything and inserted “Domain Users”, which still failed.
I also found corresponding entries for each failure in the System log, from the Network Policy Server (NPS), with Event ID 4402 claiming:

“There is no domain controller available for domain CAMPUS.”

I know the server has a valid connection to a domain controller (it logged me into the admin console). But I double-checked using NLTEST /SC_QUERY:CAMPUS. Yup; all good.
A few more Bingoogle searches later, I found a forum post about this NPS failure. The marked solution just points to a description of the Event ID, but one of the comments contains the actual fix: the Network Policy Server on the gateway systems needs to be registered. This step is not part of the official documentation, though upon re-reading that doc, I now see that someone has mentioned it in the comments.
In this case, registration simply means adding the computer objects to the RAS and IAS Servers AD group (requires Domain Admin privs). Once I made this change, I was able to successfully connect to a server using the new remote desktop gateway service.
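With the ActiveDirectory PowerShell module available, the same change can be sketched like this (the gateway computer names are placeholders, not the real server names):

```powershell
# Add the RD Gateway computer accounts to the built-in
# "RAS and IAS Servers" group. Requires the ActiveDirectory RSAT
# module and sufficient AD privileges.
Import-Module ActiveDirectory

# Hypothetical gateway names; the trailing $ denotes a computer
# account's SAM account name.
$gateways = 'RDGW01$', 'RDGW02$'
Add-ADGroupMember -Identity 'RAS and IAS Servers' -Members $gateways
```

Passing the output of `Get-ADComputer` to `-Members` would work equally well.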
Many thanks to TechNet forum user Herman Bonnie for posting the very helpful comment.

# Windows 10 Wi-Fi – No Internet

SOS!! 22 hours with no wifi!!!!

In the past 48 hours, two different family members in different households have reported problems with their Windows 10 laptops’ Wi-Fi connections. Some basic troubleshooting — restarting the modem/router, verifying other devices could connect — demonstrated that the issue was with the laptops.
The laptop was connected to the Wi-Fi access point, with full signal strength, but there was no connectivity beyond that connection.

In the first troubleshooting effort, we did the standard things:

1. Reboot. Of course.
2. Run the Network Troubleshooter.

The Network Troubleshooter didn’t resolve anything, but it did mention something useful: it reported that the “Wi-Fi” adapter had an invalid configuration.

At this point, I turned to Google, and found a couple of sites suggesting using netsh to reset the IP configuration. We ran the following commands from an elevated command prompt (run as administrator, or it won’t work):

1. netsh interface IPv4 reset
2. ipconfig /flushdns

Then we rebooted, and the system came up and connected to Wi-Fi and the Internet was available again.
Subsequently, I found a Microsoft support article entitled Fix network connection issues in Windows 10, which covers many of the steps we tried, as well as the steps that resolved our issue.
In Windows 10, if you run Netsh interactively, you see a notification that Netsh is deprecated and that you should transition to the admittedly awesome PowerShell modules for managing TCP/IP. However, given the specific behavior of the netsh interface ipv4 reset command (it overwrites registry information; see the More Information section of https://support.microsoft.com/en-us/kb/299357), I’m not sure what PowerShell command would accomplish the same end. Something to look into.
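For at least part of the sequence there are clear PowerShell equivalents; this sketch shows the ones I’m aware of (the adapter name is an assumption, and to my knowledge there is no direct cmdlet equivalent of the full stack reset):

```powershell
# Equivalent of ipconfig /flushdns
Clear-DnsClientCache

# No cmdlet rewrites the IPv4 stack registry keys the way
# 'netsh interface ipv4 reset' does; restarting the adapter is a
# milder alternative that sometimes suffices.
Restart-NetAdapter -Name 'Wi-Fi'   # 'Wi-Fi' is the typical adapter name; adjust as needed
```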

# Outlook add-ins – Message Header Analyzer and Unsubscribe

Microsoft and other providers have published add-ins that provide additional functionality within Outlook and Outlook on the web. We have enabled two add-ins which you may find useful: the Message Header Analyzer and the Unsubscribe add-in.

Click the check-box in the Turned on column to make one or both add-ins available in Outlook:

Once this step is complete, the add-ins you have turned on should appear in the message window in your Outlook mail clients for Windows, Mac, and the web. It may take a little while (or maybe a restart of Outlook) before they appear in the Windows and Mac versions.

Outlook add-ins as they appear in Outlook for the Web.

Outlook add-ins as they appear in Outlook for Windows.

The Message Header Analyzer provides a convenient way to view detailed information (metadata) about an email message, including the message routing information.

The Message Header Analyzer in Outlook for Windows.

The Unsubscribe add-in appears when viewing bulk marketing messages, and depending on the content of the message, may unsubscribe your address from the marketing list or may suggest simply blocking mail from that sender.

The Unsubscribe add-in within Outlook for Windows, suggesting that we block mail from this sender.

We hope that you will find these add-ins useful. Please let us know what you think.

# Scheduled tasks, PowerShell's -file parameter, and array values

I wrote a script that accepts a comma-separated list of values, and the script worked just fine from the command-line. However, when I tried to configure a scheduled task to run the script, it always failed.
Why? Well, I started a cmd.exe session and launched the script the same way the scheduled task did, using PowerShell’s -file parameter. When I did that, the error message emitted by the script showed me that the list was being parsed as a single string argument.
To confirm and experiment, I wrote a short little test script:


[CmdletBinding()]
param(
    [Parameter(Mandatory=$True,ValueFromPipeline=$True)]
    [string[]]
    $SpellList
)
process {
    foreach ($spell in $SpellList) {
        "Casting $spell"
    }
}


When run from within a PowerShell session, it works as expected:

PS C:\> .\Cast-WizardSpell.ps1 -SpellList 'Ray of Frost','Light','Detect Magic'
Casting Ray of Frost
Casting Light
Casting Detect Magic


When invoked using the PowerShell -file parameter, the comma-separated list is parsed as a single parameter (note: cmd.exe doesn’t like single quotes):

C:\>powershell -file .\Cast-WizardSpell.ps1 -SpellList "Ray of Frost","Light","Detect Magic"
Casting Ray of Frost,Light,Detect Magic
# Trying explicit array syntax, but no luck
C:\>powershell -file .\Cast-WizardSpell.ps1 -SpellList @("Ray of Frost","Light","Detect Magic")
Casting @(Ray of Frost,Light,Detect Magic)


What does work is to use the old-style -command syntax:

C:\>powershell -command "& .\Cast-WizardSpell.ps1 -SpellList 'Ray of Frost','Light','Detect Magic'"
Casting Ray of Frost
Casting Light
Casting Detect Magic


Alternatively, one can adjust the parameter syntax, adding the ValueFromRemainingArguments attribute. However, for this to work, you can’t specify the parameter name.
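A sketch of what that parameter declaration might look like, based on the test script above:

```powershell
[CmdletBinding()]
param(
    # ValueFromRemainingArguments collects all leftover positional
    # arguments into this array parameter.
    [Parameter(Mandatory=$True, ValueFromRemainingArguments=$True)]
    [string[]]
    $SpellList
)
process {
    foreach ($spell in $SpellList) {
        "Casting $spell"
    }
}
```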

C:\>powershell -file .\Cast-WizardSpell.ps1  "Ray of Frost" "Light" "Detect Magic"
Casting Ray of Frost
Casting Light
Casting Detect Magic
C:\local\scripts>powershell -file .\Cast-WizardSpell.ps1 -SpellList "Ray of Frost" "Light" "Detect Magic"
C:\local\scripts\Cast-WizardSpell.ps1 : A positional parameter cannot be found that accepts argument 'Light'.
+ CategoryInfo          : InvalidArgument: (:) [Cast-WizardSpell.ps1], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : PositionalParameterNotFound,Cast-WizardSpell.ps1


I’m not thrilled with either of these options, because someone like me may come along and, in an effort to be helpful, twiddle the command line, thinking they’re normalizing or updating the syntax, when they’re really breaking things. However, I think the -Command invocation is the least surprising, most consistent implementation. I’ll just make notes in the script help and in the description of the scheduled task about why I’ve used that method.

# Renaming directories with invalid names

Somehow, a client managed to create several directories with names ending in a period. File Explorer and other tools (e.g., backup software) are unable to access the folder contents, failing with an error usually reported as “The system cannot find the file specified.”
According to KB2829981, the Win32 API is supposed to remove trailing space and period characters. KB320081 has some helpful suggestions, and also indicates that some techniques allow programs to bypass the filename validation checks, and that some POSIX tools are not subject to these checks.
I found that I was able to delete these problem folders by using rmdir /q /s "\\?\J:\path\to\bad\folder." But I wanted to rename the folders in order to preserve their contents. After flailing about for a while, including attempts to modify the folders using a macOS client and a third-party SSH service on the host, I was prodded by my colleague Greg to look at Robocopy.

1. Enabled 8dot3 file name creation on a separate recovery volume (I didn’t want to do so on the multi-terabyte source volume)
2. Duplicated the parent folder containing the invalid folder names to the recovery volume using robocopy, resulting in the creation of 8dot3 names for all the folders
3. Listed the 8dot3 names of the problem folders with dir /x
4. Ran the rename command with the short name as the source and a valid new name
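The steps above can be sketched as a command sequence; the drive letters, paths, and folder names here are hypothetical:

```bat
:: 1. Enable short-name (8dot3) creation on the recovery volume R:
::    (0 = enable creation; existing files don't get names retroactively,
::    which is why the copy in step 2 is needed)
fsutil 8dot3name set R: 0

:: 2. Copy the parent folder to the recovery volume; the newly created
::    folders get 8dot3 short names
robocopy J:\Data R:\Recovery /e

:: 3. Show the generated short names
dir /x R:\Recovery

:: 4. Rename the problem folder using its short name
ren R:\Recovery\FOLDER~1 FixedFolderName
```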

This fixed the folders, and let me access their contents. I then deleted the invalid folders from the source and copied the renamed folders into place.
It seems like a simple process, but I managed to waste most of a morning figuring this out. Hopefully, this may save someone else some time.

# Troubleshooting Offline Files

My previous post describes the normal operation of Offline Files. And most of the time, “it just works.” But there are times when it won’t, and getting it running again can be challenging.

## Two Important concepts

First, it’s important to understand that the Offline Files facility is providing a virtual view of the network folder to which Documents has been redirected when Windows detects that the network folder is unavailable. This means that, when Offline Files is really borked, users can see different things in their Documents folder depending on whether their computers are online or offline.
Second, Windows treats different names for the same actual server as if they are different servers altogether. Specifically, Windows will only provide the Offline Files virtual view for the path to the target network folder. You can see the target folder path in the Properties of the Documents folder.

The Location tab shows the UNC path to the target network folder.

For example, a UNC path using the server’s short name and one using its fully qualified name can resolve to the same network folder:

\\files.uvm.edu\rallycat\MyDocs

If the fully qualified path is the one shown in the Location tab in the properties of the Documents folder, then you will be able to access that path while offline, but not a path that reaches the same folder via the short server name.

## Show me the logs

There are event logs that can be examined. I’ll mention them, but I’ve rarely found them helpful in solving a persistent problem. If you want to get the client up and running again ASAP, skip ahead to the Fix it section.
There are some logging options available that can help in diagnosing problems with offline files. There are two logs that are normally visible in the Windows Event Viewer, under the Applications and Services logs heading:

• Microsoft-Windows-Folder Redirection/Operational
• Microsoft-Windows-OfflineFiles/Operational
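These logs can also be pulled from a PowerShell session; a quick sketch:

```powershell
# Most recent Folder Redirection and Offline Files operational events
Get-WinEvent -LogName 'Microsoft-Windows-Folder Redirection/Operational' -MaxEvents 20
Get-WinEvent -LogName 'Microsoft-Windows-OfflineFiles/Operational' -MaxEvents 20
```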

# Folder Redirection and Offline Files

The following information is not new. We are in the process of making changes to our Folder Redirection policy, though, and I thought it might be helpful to have this baseline information in a place that is handy for referral.

## Background

Offline Files is a feature of Windows that was introduced in parallel with Folder Redirection in Windows 2000. Folder Redirection allows an administrator to relocate some of the user profile data folders to a network folder, which has the advantage of protecting that data from loss due to workstation issues like drive failure, malware infection, or theft. It also means you can access your data from multiple workstations.
The Offline Files facility provides a local cache of the redirected folder(s) so that mobile users can continue to work with the data in those folders when disconnected from the organization’s network. When the computer is connected to the network again, any changes to either the network folder or the local Offline Files cache are synchronized. Users are prompted to resolve any conflicting changes, e.g., the same file was modified in both places, or was deleted from one and modified in the other.

At UVM, we use Folder Redirection on the Documents folder (formerly My Documents in XP), as well as the Pictures, Video, and Music folders. Most of the time, the Offline Files facility works without issue. However, as with all technology, Offline Files can fail. There are circumstances that can result in the corruption of the database that Offline Files uses to track the sync status of files. Doing major reorganizing and renaming of files and folders, for example, seems to be a culprit. Another one is filling your quota; you can continue to save files to your local cache, but the files won’t get synced to the server because you’re out of space.

## How to sync your offline files

To manually synchronize your Offline Files with the target network folder, open the Sync Center by:

1. Going to the Start Screen (or menu) and typing sync center
2. Clicking the Sync Center item in the search results
Start search for “sync center” in Windows 8.1 and Windows 7.

or

1. Find the Sync Center tray icon and double-click it, or
2. Right-click and select the Open Sync Center menu item

Menu for the Sync Center icon in the Windows system tray.
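If you prefer the keyboard, Sync Center can also be opened directly from a Run dialog or command prompt via its Control Panel canonical name:

```bat
control /name Microsoft.SyncCenter
```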

The Sync Center Window should appear.

Offline Files status in Sync Center

Note that the Offline Files item shows the time of the most recent sync operation. If you want to initiate a sync operation, click Offline Files and then click Sync.

A sync operation has completed.

If there are errors or conflicts that require intervention to resolve, they will be shown in the result. A conflict result is shown below.

Sync operation with a conflict.

Click the N Conflicts link or View sync conflicts on the left to see details about the files in conflict.

Right-click or select and click ‘Resolve’.

Select each file conflict you want to resolve, and click Resolve or right-click the file and select View options to resolve.

Windows provides information about the files in conflict and provides several appropriate options.

In this scenario, a file has been deleted in one location, and modified while offline in the other. Since only the one file exists, there are only two options: delete the file, or copy it to both locations.

Another scenario involves a file that has been modified both offline and online, probably while using multiple computers. In that case, the resolution window offers three choices: keep the offline file (on this computer), keep the online version (on the network folder), or keep both by renaming one of them.

Sync Errors are handled differently, and may require the help of your IT support staff or the UVM Tech Team.

A sync operation with error.

To review the errors or conflicts, you can view the Sync Results.

Sync result, with detail for an error.

You can view details about an individual error by hovering over it with the mouse cursor. In the example above, my folder “2. Archive” is throwing an “Access is denied” error. To resolve an error like this, it may be necessary to contact the Tech Team. In some cases, it’s necessary to reset the Offline Files tracking database and essentially start over. This procedure is documented in a separate post, Troubleshooting Offline Files.

# GUID Chase – Group Policy troubleshooting

It started with an alert from System Center Operations Manager about a failed scheduled task. Of course, the alert references a task name that looks like a SID. Running schtasks /query showed a few jobs with a status that warranted inspection. Looking at the Microsoft-Windows-TaskScheduler/Operational log, I found that the task “\Microsoft\Windows\CertificateServicesClient\UserTask” was the one that failed and triggered the alert.
I also noted that there were some Group Policy processing errors occurring at about the same time as the task failure, including a problem applying the Group Policy Scheduled Tasks settings. And the failing task starts at user login.
Next, I ran gpresult /h to create a report of the GPOs and settings that applied, and any errors that were generated. The report confirmed that there were failures in applying the Group Policy Files settings and the Group Policy Scheduled Tasks settings.
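For reference, gpresult’s /h switch takes an output file name; something like this (the file name is arbitrary):

```bat
:: Write an HTML report of applied GPOs; /f overwrites an existing file
gpresult /h C:\temp\gpreport.html /f
```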
Some web searching turned up this thread, among others, which pointed me to the Group Policy History files in C:\Users\All Users\Microsoft\Group Policy\History. This directory contained four subdirectories named with the GUIDs for the corresponding GPOs. I was able to find three of the four GPOs by inspecting the details in the GPMC, but I couldn’t find the fourth.
I decided to search more programmatically, and started with an LDAP search with ADFind:

adfind -f "(&(objectClass=groupPolicyContainer)(name={DC257675-89C1-5AA6-5F65-B5D5CFC35E17}))"
0 Objects returned


Then, just to be sure, I used the PowerShell GroupPolicy module:

PS Z:\> import-module GroupPolicy
PS Z:\> get-gpo -guid "{DC257675-89C1-5AA6-5F65-B5D5CFC35E17}"


So I removed the subdirectory with that name from the GP History directory, and retried gpupdate /force. This time, it completed successfully.
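To check every History subdirectory in one pass, a sketch along these lines could work (the error handling is my own; note that C:\Users\All Users is a junction to C:\ProgramData, so the two paths refer to the same location):

```powershell
Import-Module GroupPolicy

# Each subdirectory is named for a GPO GUID; flag any with no matching GPO.
Get-ChildItem 'C:\ProgramData\Microsoft\Group Policy\History' -Directory |
    ForEach-Object {
        try {
            $gpo = Get-GPO -Guid $_.Name -ErrorAction Stop
            "{0}  ->  {1}" -f $_.Name, $gpo.DisplayName
        }
        catch {
            "{0}  ->  no matching GPO (orphaned?)" -f $_.Name
        }
    }
```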

# String arrays and mandatory parameters

I have been working on a function to convert the output of NET SHARE commands into usable PowerShell objects. In the course of my work, I was storing the output of the command in a variable, which I later pass into a parsing function. Curiously, the function I developed iteratively in the console worked fine, but when I dressed it up in my script, it failed:

test-array : Cannot bind argument to parameter 'foo' because it is an empty string.
At line:1 char:12
+ test-array $party
+            ~~~~~~
    + CategoryInfo          : InvalidData: (:) [test-array], ParameterBindingValidationException
    + FullyQualifiedErrorId : ParameterArgumentValidationErrorEmptyStringNotAllowed,test-array

I had verified that the variable was of type System.Array, and that it had string elements. After banging my head on it for a while, I decided to break out the parameter handling and test it separately. I wrote a quick function to accept and process the elements of a string array:

function test-array {
    param(
        [string[]]$foo
    )
    $i = 0
    foreach ($line in $foo) {
        write "[$i] $line"
        $i++
    }
}
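The likely culprit, sketched below: a mandatory parameter rejects empty-string elements, and command output captured in a variable can easily contain blank lines. (The Mandatory attribute and the sample data here are mine, for illustration.)

```powershell
function test-array {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory=$True)]
        [string[]]$foo
    )
    $foo.Count
}

# NET SHARE-style output often includes blank lines; an empty element
# fails the mandatory parameter's empty-string validation.
$party = @('Fighter', '', 'Cleric')
test-array $party   # -> ParameterArgumentValidationErrorEmptyStringNotAllowed
```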


# VSS diagnostics

For the past eight months, I’ve been working with EMC and Microsoft to diagnose a problem. Several times a month, during the backup of our primary Windows 2008 R2 file server, all the VSS shadow copies get deleted for the volume containing all our shared departmental directories.

This has two major effects. First, it means that our clients can no longer recover files using the Previous Versions feature of Windows. Second, it casts significant doubt on the validity of the backups performed at that time, which EMC NetWorker reports as having completed successfully.

We have been unable to find a technical solution to the shadow copy loss, so we will be reconfiguring our storage and shared directories to accommodate the limitations of NetWorker. In the meantime, I want to note a few of the resources that have been helpful in diagnosing problems with VSS (it will be easier to find them here than in my pile o’ email):