Posts Tagged ‘File Servers’

The case of the undeletable directory

I had a “fun” time today with some directories that I could not delete.  These were mandatory roaming profiles that I had previously attempted to upload to a file server using “robocopy”.  Owing to the switches I used when performing the upload, directory junctions were followed as though they were regular directories, and robocopy got caught in a recursive loop because of the circular “Application Data” junction.  By the time I caught the problem, I had created a set of nested “Application Data” directories with a path that must have been over 400 characters long.
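
For what it is worth, robocopy’s “/xj” switch (exclude junction points) would have kept me out of this mess in the first place.  A quick sketch of what the upload should have looked like; the paths here are made up:

# Junction-safe profile upload: /xj keeps robocopy from following junction points,
# so the circular "Application Data" junction never gets traversed.
# Source and destination paths are placeholders.
robocopy.exe C:\profiles\mandatoryProfile \\server\profiles$\mandatoryProfile /e /copyall /xj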

All of the tricks I had used in the past to delete “deep” directories were failing me.  “RD /s /q [dirName]” failed with “Access Denied”, even though I was the directory owner.  Running the command as System hit the same problem.  I mapped a drive letter as deep down in the nested folders as I could get.  From there, I was able to “CD” into the deepest “Application Data” directory, but I still could not delete it.  (I got a “directory not empty” error.)
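
(I did not keep the exact commands I used for the drive mapping, but “subst” is one way to park a drive letter partway down an over-long path; the path below is just illustrative:)

# Map a drive letter partway down the nested tree to shorten the effective path,
# then work from the new drive.  The path is a placeholder for the real mess.
subst X: "E:\mandatory\corruptProfile\Application Data\Application Data"
cd X:\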

Eventually, Google unveiled a suggestion to use “robocopy” to mirror an empty directory to the problematic directory.  (Unfortunately, I have lost track of the link, so I cannot give credit where it is due.)  “Good idea,” I thought.  Why not use the utility that created the problem to solve the problem?  After all, if robocopy can create paths that are 400 characters long, perhaps it can delete them, too.

To test, I simply created an empty directory “E:\foo”, and ran the command:
robocopy.exe E:\foo E:\mandatory\corruptProfile /mir /e

Robocopy quickly chewed through the problematic profile, and a few seconds later I had an empty directory that I was able to delete.  Hurray!

Windows file server aliases, NetBIOS over TCP, and Incomplete Documentation

It was inevitable… some people have started to notice that their bloody Multi-Function Devices (the overpriced beasts formerly known as “printer/scanner/copiers”) can no longer scan to our file server.  Why?  Because we got all smarty-pants and tried to disable “NetBT” (NetBIOS over TCP) for access to our new file server.  We figured, “Hey, CIFS over TCP has been available since, what, 1999?  We won’t have any problems with back-level clients!”  Rubes.  It turns out a lot of MFDs still require NetBT.  Rather than be a pain, I have decided to enable NetBT on the new server.  After all, it is mainly the laggards that suffer the consequences of back-level clients (and besides, if I don’t do this I will either have to go update their firmware myself, or locate some mythological pile of gold to pay for new MFDs).
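
If you need to do the same, NetBT can be flipped on per adapter from PowerShell instead of clicking through adapter properties.  A rough sketch using WMI (the value 1 means “enable NetBIOS over TCP/IP”; test before trusting it):

# Enable NetBIOS over TCP/IP on every IP-enabled adapter.
# SetTcpipNetbios values: 0 = use the DHCP-supplied setting, 1 = enable, 2 = disable.
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = true" |
    ForEach-Object { $_.SetTcpipNetbios(1) }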

The tricky bit for us was that our clients are using a name other than the server’s primary name to access the server.  Modern clients don’t care.  SMB2-capable systems only require that the server name being used has a valid matching DNS entry, and if you want Kerberos to work, you need to add a matching Service Principal Name (SPN).  We tested that out a long time ago, and had no SMB2 client issues at go-live.  SMB1 is a bit more fussy… you need to tell the server not to get bent out of shape when the requested server name is not the primary server name, by adding the “DisableStrictNameChecking” DWORD to the HKLM\System\CurrentControlSet\Services\LanManServer\Parameters registry key.  We picked up on that one on migration night.
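
For reference, a sketch of that registry change in PowerShell form, plus the SPN registration for the alias.  “filesalias” and “fileserver01” are placeholder names, and you will want to restart the Server service (or reboot) after the registry change:

# Let LanManServer accept SMB connections to names other than the primary
# computer name (needed when clients use an alias).
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" `
    -Name "DisableStrictNameChecking" -PropertyType DWord -Value 1 -Force

# Register SPNs for the alias so Kerberos works.  "filesalias" and "fileserver01"
# are placeholders for your alias and the server's real computer name.
setspn -A HOST/filesalias fileserver01
setspn -A HOST/filesalias.example.edu fileserver01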

But now, after activating NetBIOS over TCP, client systems are still giving us grief.  Upon connection, we can see the NetBIOS Session Service reporting “Called Name Not Present” in a wire trace.  What gives?  DNS, SPN, and WINS entries are not enough for good old LanMan?  Sounds like the server needs some additional convincing.

It turns out the key is in “optional names”.  When I added a REG_MULTI_SZ entry named “OptionalNames” to the HKLM\System\CurrentControlSet\Services\LanManServer\Parameters registry key, populated it with the alias name of our file server, activated NetBIOS over TCP, and rebooted, the server started granting connection requests on TCP port 139 (the legacy NetBT CIFS port).  Hurray!  As an added bonus, our Windows server is now registering both its primary name and its supported alias in WINS and DNS… no need for static entries.  Now to see if the MFDs are satisfied…
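
In PowerShell terms, the change looks something like this (“FILESALIAS” is a placeholder for your alias; reboot afterwards):

# Add the alias to LanManServer's "OptionalNames" multi-string value so the
# NetBIOS Session Service will answer for it on TCP 139.
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" `
    -Name "OptionalNames" -PropertyType MultiString -Value @("FILESALIAS") -Force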

It really is a pity that there is no single article on this subject in TechNet.  It appears that the folks at Microsoft have little belief that anyone would be so asinine as to want a file server with an alias, and support for CIFS over NetBT, and support for CIFS over TCP, and support for modern SMB2 clients.  I’ll bet a consultant would say “Gee, I have never seen this before”.  We get that a lot.


Migrating from NetApp to Windows File Servers with PowerShell – part 2

Previously we saw how PowerShell and RoboCopy can be used to sync multi-terabyte file shares from NetApp to Windows. What I did not tell you was that this script choked and died horribly on a single share in our infrastructure. You may have seen it commented out in the previous script: “#,'R25'”.

CollegeNet Resource25… my old enemy. These clowns worked around a bug in their product (an inability to read an open text column in an Oracle DB table) by copying every text row in the database to its own file on a file server and, to make matters worse, they dump all of the files into the same directory. Why is this bad? Ever try to get a directory listing on a directory with 480,000 1 KB files? It’s bad news. Worse, it kills robocopy. Fortunately, we have a workaround.

The archive utility “7-zip” is able to wrap up the nasty directory into a single small file, which we then can unpack on the new file server. Not familiar with 7-Zip? For shame! Get it now, it’s free:
http://www.7-zip.org/

7-zip ignores most file attributes, which seems to speed up the copy process a bit. Using robocopy, our sync operation would either run for hours on this single directory or just hang forever. With 7-zip, we get the job done in 30 minutes. Still slow, but better than never.

Troublesome files are found in the R25 “text_comments” directory, a subdirectory of “text”. We have prod, pre-prod, and test environments, and so need to make a few separate 7-zip archives. Note that a little compression goes a long way here. When using “tar” archives, my archive was several GB in size. With the lowest level of compression, we squeeze down to only about 14 MB. How is this possible? Well, a lot of our text comment files are empty, but uncompressed each one still takes up a block of storage. Across 480,000 files, that really adds up.
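
If you only need to do one directory by hand, the interactive equivalent is a pair of 7za calls along these lines (the paths are illustrative):

# Pack the problem directory with minimal compression (-mx=1), then unpack it
# on the destination volume.  Paths are placeholders for your environment.
& c:\local\bin\7za.exe a -mx=1 "$env:TEMP\text_comments.7z" \\files\r25\PROD\WinXP\text\text_comments
& c:\local\bin\7za.exe x -aoa "-oS:\r25\PROD\WinXP\text" "$env:TEMP\text_comments.7z"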

The full scripted version follows:

#Sync R25 problem dirs

Set-PSDebug -Strict

# Initialize the log file:
[string] $logfile = "s:\r25Sync.log"
remove-item $logfile -Force -ErrorAction SilentlyContinue # ok if the log does not exist yet
[datetime] $startTime = Get-Date
[string] "Start Time: " + $startTime | Out-File $logfile -Append

function zipit {
	param ([string]$source)
	[string] $cmd = "c:\local\bin\7za.exe"
	[string] $arg1 = "a" #add (to archive) mode
	[string] $arg2 = join-path -Path $Env:TEMP -ChildPath $($($source | `
		Split-Path -Leaf) + ".7z") # filespec for archive
	[string] $arg3 = $source #spec for source directory
	[string] $arg4 = "-mx=1" #compression level... minimal for performance
	#[string] $arg4 = "-mtm=on" #timestamp preservation - commented out for perf.
	#[string] $arg5 = "-mtc=on"
	#[string] $arg6 = "-mta=on"
	#invoke command, route output to null for performance.
	& $cmd $arg1,$arg2,$arg3,$arg4 > $null 
}

function unzipit {
	param ([string]$dest)
	[string] $cmd = "c:\local\bin\7za.exe"
	[string] $arg1 = "x" #extract archive mode
	[string] $arg2 = join-path -Path $Env:TEMP -ChildPath $($($dest | `
		Split-Path -Leaf) + ".7z")
	[string] $arg3 = "-aoa" #overwrite existing files
	#destination directory specification:
	[string] $arg4 = '-o"' + $(split-path -Parent $dest) + '"' 
	#invoke command, route to null for performance:
	& $cmd $arg1,$arg2,$arg3,$arg4 > $null 
	Remove-Item $arg2 -Force # delete archive
}

[String[]] $zips = "V3.3","V3.3.1","PROD\WinXp\Text"
[string] $sourceD = "\\files\r25"
[string] $destD = "s:\r25"

foreach ($zip in $zips) {
	Get-Date | Out-File $logfile -Append 
	[string] "Compressing directory: " + $zip | Out-File $logfile -Append 
	zipIt $(join-path -Path $sourceD -ChildPath $zip)
	Get-Date | Out-File $logfile -Append 
	[string] "Uncompressing to:" + $destD | Out-File $logfile -Append
	unzipit $(Join-Path -Path $destD -ChildPath $zip)
}

Get-Date | Out-File $logfile -Append 
[string] "Syncing remaining files using Robocopy..." | Out-File $logfile -Append
$xd1 = "\\files\r25\V3.3" 
$xd2 = "\\files\r25\V3.3.1" 
$xd3 = "\\files\r25\PROD\WinXP\text"
$xd4 = "\\files\r25\~snapshot"
$roboArgs = @("/e","/copy:datso","/purge","/nfl","/ndl","/np","/r:0","/mt:4",`
	"/b",$sourceD,$destD,"/xd",$xd1,$xd2,$xd3,$xd4)

& robocopy.exe $roboArgs

Get-Date | Out-File $logfile -Append 
[string] "Done with Robocopy..." | Out-File $logfile -Append

# Complete logging:
[datetime] $endTime = Get-Date
[string] "End Time: " + $endTime | Out-File $logfile -Append 
$elapsedTime = $endTime - $startTime
[string] $outstr =  "Elapsed Time: " + [math]::floor($elapsedTime.TotalHours)`
	+ " hours, " + $elapsedTime.minutes + " minutes, " + $elapsedTime.seconds`
	+ " seconds."
$outstr | out-file -Append $logfile

Migrating from NetApp to Windows File Servers with PowerShell – part 1

We are retiring our NetApp filer this year. It was nice knowing you, NetApp. Thank you for the no-hassle performance, agile volume management, and excellent customer support. We will not miss your insane pricing or your subtle incompatibilities with modern Windows clients.

In this multi-part series, I will be sharing PowerShell code developed to assist with our migration. In part one, we will look at bulk copy operations with RoboCopy. In part two, we will look at a situation where RoboCopy fails to get the job done. In future parts, we will look at automated share and quota management and migration.

Migrating large amounts of data off a NetApp is not particularly straightforward. The only real option we have is to copy data off of the filer CIFS shares to their Windows counterparts. Fortunately, with the multi-threading utility “robocopy” we can move data between shares pretty quickly. Unfortunately, robocopy only multi-threads file copy operations, not directory search operations, so while initial data transfers with robocopy run very quickly, subsequent sync operations are slower than expected. MS also released a utility called “RichCopy”, which supports multi-threaded directory searching, but this utility is not supported by MS and has some significant bugs (i.e. it crashes all the time). What to do?

PowerShell to the rescue! Using PowerShell jobs, we can spawn off a separate robocopy job for each subdirectory of a source share, and run an arbitrary number of parallel directory copies. With some experimentation, I determined that I could run ten simultaneous robocopy operations without overwhelming CPU or disk channels on the filer. Under this arrangement, our file sync window has been reduced from almost 48 hours to a mere 2.5 hours.

Some tricky bits in the development of this script were:

  • PowerShell jobs and job queuing are critical to completing this script in a timely fashion. Syntax for “start-job” is tricky. See my post on backup performance testing for more comments on working with jobs.
  • Robocopy fails to copy a number of source files. This is mitigated through the use of the “/b” switch (backup mode).
  • The PowerShell cmdlet “Receive-Job” fails to capture output from a variety of job commands unless you assign the job to an object. To reliably capture the output of commands within our jobs, I needed to assign the jobs to our $jobs array (see the short sketch after this list).
  • I needed to do some post processing on the log file. In doing so, I needed to find UNC paths for our source filer “\\files”. It is important to remember that, when using regular expressions, “\” is the escape character. So, to match for “\”, we need to enter “\\”. To match for “\\” we need to enter “\\\\”, as in:
     get-content $logfile | select-string -Pattern "\\\\files" | ...
  • Initially I allowed the script to process only one top-level directory at a time (i.e. start with \\files\software, and only proceed to \\files\shared when “software” completes). The problem was that this prevented the script from running an optimal job count, and a single hung job could bring the whole script to a halt. To combat this, I start the script by building a master queue array “$q”, which holds all of the directories for which I am going to start a job. The result of using a master queue is a considerable improvement in sustained throughput.
  • When building an array with a loop (i.e. while…) you may have trouble with the first item added to the array if you do not initialize the array before starting the loop. In my case, I needed to initialize “[array]$jobs = @()” before using the array to hold job objects in the “while” loop. Failing to do so caused “$jobs” to become a single job object when the number of jobs was equal to one. Bad news, if you are expecting to use array properties such as $jobs.count, or to index into the object (i.e. $jobs[0]).
  • ISE programs like the native PowerShell ISE or Quest PowerGUI make script development much easier. However, production environments are not the same as the debug environment, so keep these tips in mind:
    1. Log your script actions! Use lots of Out-File calls. If you are feeling slick, you can enclose these in “if ($debug)” clauses and set the $debug variable as a script parameter (which I did not do here).
    2. When running in production, watch the log file in real time using “get-content -wait”. I know it is not as cool as the GNU command “tail”, but it is close.
  • Scoping… be careful with the “global” scope. Initially I modified the $jobs and $dc variables in the global scope from within the “collectJobs” function. This worked fine in my ISE and at the PowerShell prompt; however, when running as a scheduled task, these calls failed miserably. I changed the calls to use the “script” scope, and the script now runs successfully as a scheduled task.
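
Here is a minimal sketch of the jobs/output pattern mentioned in the list above; the robocopy help text stands in for a real copy command:

# Keep job objects in an array so Receive-Job can reliably return their output.
[array] $jobs = @()
$jobs += Start-Job -ScriptBlock { & robocopy.exe /? }  # placeholder command
$jobs | Wait-Job | Out-Null
$jobs | Receive-Job      # job output comes back here
$jobs | Remove-Job -Force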

Below is the script I developed for this job… it contains paths specific to our infrastructure, but it easily could be modified. Change the “while ($jobs.count -lt 10)” loop to set the number of simultaneous robocopy processes used by the script…

# FilerSync_jobQueue.ps1
# JGM, 2011-09-29
# Copies all content of the paths specified in the $srcShares array to 
# corresponding paths on the local server.
# Keeps data on all copy jobs in an array "$q".
# We will use up to 10 simultaneous robocopy operations.

set-psdebug -strict

# Initialize the log file:
[string] $logfile = "s:\files_to_local.log"
remove-item $logfile -Force -ErrorAction SilentlyContinue # ok if the log does not exist yet
[datetime] $startTime = Get-Date
[string] "Start Time: " + $startTime | Out-File $logfile -Append

# Initialize the Source file server root directories:
[String[]] $srcShares1 = "adfs$","JMP$","tsFlexConfig","software","mca","sis","shared"`
	#,"R25"
	#R25 removed from this sync process as the "text_comments" directory kills
	#robocopy.  We will sync this structure separately.
[String[]] $srcShares2 = "uvol_t1_1$\q-home","uvol_t1_2$\q-home","uvol_t1_3$\q-home",`
	"uvol_t1_4$\q-home","uvol_t1_5$\q-home","uvol_t2_1$\q-home",`
	"vol1$\qtree-home"
	
[Array[]] $q = @() #queue array: each entry is (source, dest, copy option, level/recurse option)

function collectJobs { 
#Detects jobs with status of Completed or Stopped.
#Collects jobs output to log file, increments the "done jobs" count, 
#Then rebuilds the $jobs array to contain only running jobs.
#Modifies variables in the script scope.
	$djs = @(); #Completed jobs array
	$djs += $script:jobs | ? {$_.State -match "Completed|Stopped"} ;
	[string]$('$djs.count = ' + $djs.count + ' ; Possible number of jobs completed in this collection cycle.') | Out-File $logfile -Append;
	if ($djs[0] -ne $null) { #First item in done jobs array should not be null.
		$script:dc += $djs.count; #increment job count
		[string]$('$script:dc = ' + $script:dc + ' ; Total number of completed jobs.') | Out-File $logfile -Append;
		$djs | Receive-Job | Out-File $logfile -Append; #log job output to file
		$djs | Remove-Job -Force;
		Remove-Variable djs;
		$script:jobs = @($script:jobs | ? {$_.State -eq "Running"}) ; #rebuild jobs arr
		[string]$('$script:jobs.count = ' + $script:jobs.Count + ' ; Exiting function...') | Out-File $logfile -Append
	} else {
		[string]$('$djs[0] is null.  No jobs completed in this cycle.') | Out-File $logfile -Append
	}
}
	
# Loop though the source directories:
foreach ($rootPath in $srcShares1) {
    [string] $srcPath = "\\files\" + $rootPath # Full Source Directory path.  
	#Switch maps the source directory to a destination volume stored in $target 
    switch ($rootPath) {
        'shared' {[string] $target = "S:\shared"}
        'software' {[string] $target = "S:\software"}
        'mca' {[string] $target = "S:\mca"}
        'sis' {[string] $target = "S:\sis"}
        'adfs$' {[string] $target = "S:\adfs"}
        'tsFlexConfig' {[string] $target = "s:\tsFlexConfig"}
        'JMP$' {[string] $target = "s:\JMP"}
        'R25' {[string] $target = "S:\R25"}
    }
    #Enumerate directories to copy:
	$dirs1 = @()
	$dirs1 += gci $srcPath | sort-object -Property Name `
		| ? {$_.Attributes.tostring() -match "Directory"} `
		| ? {$_.Name -notmatch "~snapshot"}
	#Copy files in the root directory:
	[string] $sd = '"' + $srcPath + '"';
	[string] $dd = '"' + $target + '"';
	$q += ,@($sd,$dd,'"/COPY:DATSO"','"/LEV:1"' )
	# Add to queue:
	if ($dirs1[0] -ne $null) {
		foreach ($d in $dirs1) {
			[string] $sd = '"' + $d.FullName + '"';
	    	[string] $dd = '"' + $target + "\" + $d.Name + '"';
			$q += ,@($sd,$dd,'"/COPY:DATSO"','"/e"')
		}
	}
}
foreach ($rootPath in $srcShares2) {   
    [string] $srcPath = "\\files\" + $rootPath # Full Source Directory path.
	#Switch maps the source directory to a destination volume stored in $target 
    switch ($rootPath) {
        'uvol_t1_1$\q-home' {[string] $target = "H:\homes1"}
        'uvol_t1_2$\q-home' {[string] $target = "I:\homes1"}
        'uvol_t1_3$\q-home' {[string] $target = "J:\homes1"}
        'uvol_t1_4$\q-home' {[string] $target = "K:\homes1"}
        'uvol_t1_5$\q-home' {[string] $target = "L:\homes1"}
        'uvol_t2_1$\q-home' {[string] $target = "M:\homes1"}
        'vol1$\qtree-home' {[string] $target = "J:\homes2"}
    }
    #Enumerate directories to copy:
	[array]$dirs1 = gci -Force $srcPath | sort-object -Property Name `
		| ? {$_.Attributes.tostring() -match "Directory"}
	if ($dirs1[0] -ne $null) {
		foreach ($d in $dirs1) {
			[string] $sd = '"' + $d.FullName + '"'
			[string] $dd = '"' + $target + "\" + $d.Name + '"'
			$q += ,@($sd,$dd,'"/COPY:DAT"','"/e"')
		}
	}
}

[string] $queueFile = "s:\files_to_local_queue.csv"
Remove-Item -Force $queueFile -ErrorAction SilentlyContinue # ok if not present
foreach ($i in $q) {[string]$($i[0]+", "+$i[1]+", "+$i[2]+", "+$i[3]) >> $queueFile }

New-Variable -Name dc -Option AllScope -Value 0
[int] $dc = 0			#Count of completed (done) jobs.
[int] $qc = $q.Count	#Initial count of jobs in the queue
[int] $qi = 0			#Queue Index - current location in queue
[int] $jc = 0			#Job count - number of running jobs
$jobs = @()

while ($qc -gt $qi) { # Problem here as some "done jobs" are not getting captured.
	while (($jobs.count -lt 10) -and ($qi -lt $qc)) { #start jobs up to the cap, but do not run past the end of the queue
		[string] $('In ($jobs.count -lt 10) loop...') | out-file -Append $logFile
		[string] $('$jobs.count is now: ' + $jobs.count) | out-file -Append $logFile
		[string] $jobName = 'qJob_' + $qi + '_';
		[string] $sd = $q[$qi][0]; [string]$dd = $q[$qi][1];
		[string] $cpo = $q[$qi][2]; [string] $lev = $q[$qi][3]; 
		[string]$cmd = "& robocopy.exe $lev,$cpo,`"/dcopy:t`",`"/purge`",`"/nfl`",`"/ndl`",`"/np`",`"/r:0`",`"/mt:4`",`"/b`",$sd,$dd";
		[string] $('Starting job with source: ' + $sd +' and destination: ' + $dd) | out-file -Append $logFile
		$jobs += Start-Job -Name $jobName -ScriptBlock ([scriptblock]::create($cmd))
		[string] $('Job started.  Incrementing $qi to: ' + [string]$($qi + 1)) | out-file -Append $logFile
		$qi++
	}
	[string] $("About to run collectJobs function...") | out-file -Append $logFile
	collectJobs
	[string] $('Function done.  $jobs.count is now: ' + $jobs.count)| out-file -Append $logFile
	[string] $('$jobs.count = '+$jobs.Count+' ; Sleeping for three seconds...') | out-file -Append $logFile
	Start-Sleep -Seconds 3
}
#Wait up to two hours for remaining jobs to complete:
[string] $('Started last job in queue. Waiting up to two hours for completion...') | out-file -Append $logFile
$jobs | Wait-Job -Timeout 7200 | Stop-Job
collectJobs

# Complete logging:
[datetime] $endTime = Get-Date
[string] "End Time: " + $endTime | Out-File $logfile -Append 
$elapsedTime = $endTime - $startTime
[string] $out =  "Elapsed Time: " + [math]::floor($elapsedTime.TotalHours)`
	+ " hours, " + $elapsedTime.minutes + " minutes, " + $elapsedTime.seconds`
	+ " seconds."
$out | out-file -Append $logfile

#Create an error log from the session log.  Convert error codes to descriptions:
[string] $errFile = 's:\files_to_local.err'
remove-item $errFile -force -ErrorAction SilentlyContinue # ok if not present
[string] $out = "Failed jobs:"; $out | out-file -Append $logfile
$jobs | out-file -Append $errFile
$jobs | % {$_.command} | out-file -Append $errFile
[string] $out = "Failed files/directories:"; $out | out-file -Append $errFile
Get-Content $logfile | Select-String -Pattern "\\\\files"`
	| select-string -NotMatch -pattern "^   Source" `
	| % {
		$a = $_.toString();
		[string]$e = 'otherError:       '; #default label when no known error code matches
		if ($a -match "ERROR 32 ")  {[string]$e = 'fileInUse:        '};
		if ($a -match "ERROR 267 ") {[string]$e = 'directoryInvalid: '};
		if ($a -match "ERROR 112 ") {[string]$e = 'notEnoughSpace:   '};
		if ($a -match "ERROR 5 ")   {[string]$e = 'accessDenied:     '};
		if ($a -match "ERROR 3 ")   {[string]$e = 'cannotFindPath:   '};
		$i = $a.IndexOf("\\f");
		$f = $a.substring($i);
		Write-Output "$e$f" | Out-File $errFile -Force -Append
	}