Nov 2nd, 12
You would think this one would be easy…
Some of our users were noticing that it was taking over 30 seconds to launch IE to a web site that was configured at the command line (i.e. we run “iexplore.exe https://mysite.com”). While page loading was indeed slow, at least 20 seconds of the delay occurred before IE even started to load content from the web site in question (determined by using Wireshark). Instead, what we saw was a lot of CLDAP chatter and the ever-revealing DNS lookup attempts for hosts starting with the RDN “WPAD”. It looks like Internet Explorer Web Proxy Auto-Detection is wasting our time again.
Traditionally, the solution was to use the “Internet Explorer Maintenance” settings in Group Policy to disable automatic proxy configuration. However, it appears that with the release of Windows 8, this branch of Group Policy is being deprecated. So what is the “right” way to set this policy now? I am using GP Preferences.
Using SysInternals ProcMon, we are able to see that the following registry value is modified when we manually disable automatic proxy detection:
This is a REG_BINARY setting, which makes intuitive understanding of the meaning of the value impossible.
Through experimentation, I was able to determine that when the ninth pair of digits is altered from “09” to “01”, proxy auto-configuration is disabled. The parallel “SavedLegacySettings” value also gets modified, but that value has no effect on actual IE settings. The fifth pair of digits also gets modified each time you reconfigure the IE connection settings. I expect these digits represent some sort of change sequence number, as setting different values here does not seem to produce any behavior change in IE.
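As an illustration of the byte-level change described above, here is a minimal Python sketch. It rests on a couple of assumptions on my part: that the value in question is “DefaultConnectionSettings” under HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections, and that the ninth byte is a flag bitmask in which 0x08 means “Automatically detect settings” (so 0x09 → 0x01 is simply that bit being cleared). On a real system you would read and write the value with the winreg module; here I just patch the bytes:

```python
# Assumption: byte 8 (the ninth pair of hex digits) of the
# "DefaultConnectionSettings" REG_BINARY value is a flag bitmask,
# where 0x08 = "Automatically detect settings" (WPAD).

def disable_proxy_autodetect(settings: bytes) -> bytes:
    buf = bytearray(settings)
    buf[8] &= ~0x08               # clear the auto-detect bit: 0x09 -> 0x01
    buf[4] = (buf[4] + 1) & 0xFF  # bump the change counter (the "fifth pair of digits")
    return bytes(buf)

# A value captured with flags 0x09 (direct connection + auto-detect enabled):
sample = bytes([0x46, 0, 0, 0, 0x02, 0, 0, 0, 0x09, 0, 0, 0, 0, 0, 0, 0])
patched = disable_proxy_autodetect(sample)
```

The same patched bytes could then be pushed out with a GP Preferences registry item, which is what I ended up doing.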
So whoopee… another GP Preference setting, and I have saved my colleagues 20 seconds of time on each startup of IE. If they all use the affected systems once a day for the next 80 days, this whole activity will have been worthwhile.
May 3rd, 12
I got stupid again this week and decided to investigate management of Mac computers. Since we are both cheap and overworked, I wanted a solution that would be free and would not require any new infrastructure. The only options that came to mind were either to extend our OpenLDAP server to support the Apple schema, or to extend Active Directory. Since I am the “Windows Guy”, and since use of AD also brings benefits such as Kerberized login to file servers, support for NTLM auth, and support for File Sync, I went with the AD option.
Not fun, very messy, but it works. Here are the gotchas:
- Use the Lion Active Directory integration guide or later. Earlier guides might lead you to create a schema extension file that will, in turn, create invalid classes in your AD schema. The new guide works well.
- If your computer was bound to AD before the schema update, you likely will need to unbind/rebind before you will be able to take advantage of MCX settings in AD. The reason is that your Apple hardware UUID and macAddress attributes are not populated on your AD computer object until after the schema extension is done. Apple says you only need to restart the OpenDirectory daemon (“killall opendirectoryd” as root) to get the attributes populated, but I do not think this is accurate.
- If your domain/forest is at the 2008 or 2008 R2 level, you will need to run “dsconfigad -alldomains disable” after domain bind, or you will not be able to find apple computer-list objects in the domain. After running this command, you have to specify the AD domains to search using Directory Utility, or by running the following commands as root:
dscl localhost -delete /Search CSPSearchPath “/Active Directory/CAMPUS/All Domains”
dscl localhost -append /Search CSPSearchPath “/Active Directory/CAMPUS/campus.ad.uvm.edu”
- The Lion+ AD plugin will look for computer objects in only one place in your domain… “CN=Mac OS X”. If you want to manage Macs using computer lists, you must create a “container” object (not an Org Unit) with this name in the root of your domain, and create your apple-computer-list objects there. You will need to use ADSI Edit to create the container and apple-computer-list objects, as ADUC does not allow for creation of these object types.
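If you would rather not click through ADSI Edit, the container can also be created from an LDIF file with the ldifde tool on a domain controller. A sketch, using the domain DN from the dscl example above (the apple-computer-list objects would be created the same way once the Apple schema extension is in place):

```ldif
dn: CN=Mac OS X,DC=campus,DC=ad,DC=uvm,DC=edu
changetype: add
objectClass: container
cn: Mac OS X
```

Import it with “ldifde -i -f macosx.ldf” from an elevated prompt on a domain controller (or any machine with the AD DS admin tools installed).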
There is a lot more to learn here. One thing I am curious about is how we might be able to safely delegate Mac computer management, given that someone with control over a user group or computer group can add any user or computer to that group, and thus can impose settings on users and computers arbitrarily. One option might be to disable or constrain access to the apple extensions to the user and group objects, so that only select and highly trusted individuals can control user MCX settings. For computer-list objects, we might not be able to delegate management of the members attribute of the computer list object, but instead only delegate management of the MCX settings. Messy.
I also am curious to know how MCX settings are applied when multiple MCX settings are used on users, groups, computers, and computer-lists. Which policy takes priority? How is merging accomplished?
Ultimately, it is nice to know that Apple has provided a functional solution for Mac management in an AD domain. It may not be perfect, but it is a start. Perhaps when combined with “Munki”, or another Apple Software Update clone, we will really have something solid for Enterprise Management of the Mac.
Dec 6th, 11
Here we are, working with SCCM again. Making difficult things possible, and simple things difficult. Today we wish to distribute a SmartCard driver to all of our managed servers, so that we can require Smart Card authentication for certain classes of logins. The newer “CNG” Smart Card minidrivers are all simple “.inf” driver packages that you can right-click install. This ought to be easy, thought the sys admin. Wrong!
Installation of inf drivers is not a well documented command line procedure (unlike the rather more complicated “.msi” package, which at least is easy to script).
My thanks go out to the following bloggers and forum users for their assistance with this case:
The script that I cobbled together to install the Athena “ASECard” minidriver is displayed below. Note that this should work for pretty much any minidriver, as long as it has a “DefaultInstall” section in the inf file. I just unpack the amd64 and x86 driver cab files into their respective directories, put the batch script one directory above these, and make an SCCM software package of the whole thing. The installation command line is simply the batch file name.
@echo off
REM Installs the drivers specified in the "DefaultInstall" section
REM of the aseMD.inf that is appropriate for the current (x86 or amd64) platform.
REM Install is silent (4 flag), with no reboot (N flag).
REM The INF is expected to be in the x86 or amd64 subdirectory
REM of the script directory (%~dp0).
echo Detecting platform...
IF EXIST "%programfiles(x86)%" (GOTO :amd64) ELSE (GOTO :i386)

:i386
echo Installing 32-bit driver...
%windir%\system32\rundll32.exe advpack.dll,LaunchINFSectionEx "%~dp0x86\aseMD.inf",DefaultInstall,,4,N
GOTO :EOF

:amd64
REM The rundll32 command below will run in 64-bit mode (%windir%\sysnative\),
REM when called from a 32-bit CMD.exe (as will be the case with SCCM).
echo Installing 64-bit driver...
%windir%\sysnative\rundll32.exe advpack.dll,LaunchINFSectionEx "%~dp0amd64\aseMD.inf",DefaultInstall,,4,N
REM End of file
Oct 14th, 11
We have been tracking a problem with some of our Operations Manager agents on Server 2008 R2. We have a pool of single-CPU VMs that have been reporting “Operations Manager Agent CPU too high” alerts every ten hours or so (give or take a few hours). Unfortunately, I am not able to catch the agents while the CPU spike is taking place. Maybe I could set up a “Data Collector Set” to gather lots of process information when a CPU spike condition occurs, but I am feeling lazy and don’t want to do it.
So instead, I am taking a different approach… disabling non-essential discoveries to see if this lightens the load on the agents enough to stop the CPU spikes. I thought I knew how to do this already, but my first pass failed, and I had to learn something new (gasp!). My thanks to Jonathan Almquist for his post on this subject:
Without that one, I would still be foundering.
In our case, I wanted to suppress discovery of System Center Configuration Manager 2007 Clients in the SCCM 2007 Management Pack. To accomplish this, we need to identify the pertinent discovery rules, create a group that contains the agents that we want to exclude from discovery, then override the discovery for this new group. We then can speed cleanup of the now-obsolete discovered objects using the PowerShell “remove-disabledMonitoringObject” cmdlet.
- Go to the OpsMgr console, change to the Authoring->Management Pack Objects->Object Discoveries view. Use the “change scope” option to limit the displayed discovery rules to only those in the Configuration Manager management packs. In this instance, we see there are rules for “Microsoft ConfigMgr 2007 Clients Discovery” and “Microsoft ConfigMgr 2007 Advanced Client Discovery”. I will disable discovery for both of these. Before moving on, take careful note of the “target” column. In this case the target is “MOM 2005 Backward Compatibility Computer”, not “Windows Computer”, as you might expect.
- Change to the Authoring->Groups view. Create a group that includes only objects of the type you identified in the first step. I used dynamic inclusion rules to add all entities that do not match the naming convention of our Configuration Manager servers.
- Now go back to the Object Discoveries view, find the rules you want to override again, and add an override for objects in your new group.
- You could wait a few discovery cycles for the discovered entities to go away, or just pop into the OpsMgr PowerShell console, and run “remove-DisabledMonitoringObject”. If you did your override rules properly, your undesirable objects should disappear right away.
I now have removed discovery and monitoring of the SCCM Client on all of the Windows Servers in my monitored environment. We now shall see if this makes the OpsMgr Agent CPU utilization alerts go away.
Mar 23rd, 11
We are piloting a deployment of SCCM 2007 R3 as part of our evaluation of Forefront Endpoint Protection 2010. I thought I would have SCCM up in a day to a day and a half… Ha! If you are planning to do something similar, schedule a good four+ days for initial configuration (unless you are the Windows equivalent of Bruce Lee).
- Complex PKI certificate requirements. You need to create a Windows PKI server template just to deploy one signing cert to the site management server! These certs cannot use the next-generation crypto (CNG) templates that came with Server 2008… you must use Server 2003 templates (CAPI).
- Logging shortcomings. I suppose veteran SCCM folks will think I am daft. After all, SCCM makes more logs than just about any other MS product. However, the logs are long on data, short on information. I wasted over a day troubleshooting client-to-management-point communications that turned out to be related to permissions problems with a cert in the SCCM server system account’s “My” certificates store. The problem was that I used drag/drop in the cert MMC to install the cert, but that method did not set cert permissions properly. After exporting/importing the cert, then setting permissions as detailed here:
I was able to get IIS to bind reliably to the cert, and clients started to check in. The SCCM client and server logs were no help with this.
- Reporting Services – Since I last configured reporting on SQL 2005, things have gotten easier. However, RTM releases still are not reliable enough. I discovered we needed SQL 2008 R2 CU4 or later to get SCCM to work reliably with reporting services.
- Schema Extensions – Never fun. The process is well documented on Tech Net, but it’s still a pain.
- Server installation prerequisites – There are many prereqs for SCCM. The documentation lists them reliably. What is not mentioned is that the server role prereqs need to be installed simultaneously. If BITS, WebDAV, and ASP.NET are not installed at the same time, SCCM will fail to function after installation.
All that being said, the product has made great strides since I last looked at it (when it was called SMS 2003). Integration with WSUS is a plus, as is the “Advanced Client”, which uses a simple client pull over HTTPS to fetch configurations and submit status. Good stuff… less dependency on RPCs and File/Print Sharing.