29 April, 2017

Install patches remotely - OS

When you need to install a hotfix outside of your normal patching cycle and outside of the usual patching tool (be that WSUS, SCCM... whatever), a good way is to do it semi-manually by running wusa.exe.
However, if you have 20+ hosts, it might be a bit inconvenient to RDP to each one and execute wusa or double-click the msu file.


So why not use PS remoting or winrs? Let's give it a go (I copied the msu file to each host into their c:\temp folder prior to running winrs):


gc c:\hostlist.txt | %{winrs -r:$_ wusa.exe c:\temp\windows6.1-kbXXXXXXX.msu /passive /quiet /forcerestart}


But then you will get this: Windows update could not be installed because of error 2147942405 "Access is denied."


Whaaaat? Why? WHY? A bit of googling will get you here: https://support.microsoft.com/en-us/help/2773898/windows-update-standalone-installer-wusa-returns-0x5-error-access-denied-when-deploying-.msu-files-through-winrm-and-windows-remote-shell


Ok, I will need to extract the msu and then run dism on each host. It's no big deal: you can run the two commands in a remote PS session, or run two lines of winrs... etc. But here is an alternative solution, psexec:


"host1,host2,host3".split(",") | %{start-process psexec.exe -arg "-s \\$_ wusa.exe c:\temp\windows6.1-kbXXXXXXX.msu /passive /quiet /forcerestart"}


However, you can't just run psexec directly, as it gets hung on wusa.exe. You need to kick it off with 'start-process' and pass the rest of the arguments to it; those arguments are for psexec, which then executes wusa.exe under the SYSTEM context.
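For completeness, the extract-and-dism workaround mentioned earlier looks roughly like this; the expand and dism switches are standard, but treat the exact .cab name inside the package as a placeholder you need to check after extraction:

```powershell
# extract the contents of the msu package (it contains the actual update as a .cab)
winrs -r:host1 expand -f:* c:\temp\windows6.1-kbXXXXXXX.msu c:\temp\
# install the extracted .cab with dism - unlike wusa, this works over winrs / PS remoting
winrs -r:host1 dism /online /add-package /packagepath:c:\temp\windows6.1-kbXXXXXXX.cab /quiet /norestart
```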


Hope this helps.
t



26 March, 2017

Split Array into smaller arrays - Scripting

Have you ever had a case where you had a long list of things and you wanted to split it up? Like a long shopping list which you really didn't want to tackle alone. Or a long list of IP addresses, or host names, or objects in Active Directory... whatever. And you want to split the long list into many smaller lists, to run things in parallel or to stagger the running of a script (making sure you only run a script on a smaller batch at a time instead of running it on the full list).

So let's split a long list of host names into 6 pieces. You can do this in many ways, but I use this oneliner: 

$NumberOfArrays = 6; $NewArrays = @{}; $i = 0;  gc c:\hostlist.txt | %{$NewArrays[$i % $NumberOfArrays] += @($_); $i++}; $NewArrays


It uses a few tricks.

The first one is the modulus operator (%), which spits out the remainder when you divide two integers. That means if you do a mod operation on a running number (from 0 to the max count of the list) with the number of pieces you want to split the list into, it will repeat a sequence of numbers: from 0 until it reaches one less than the number we are dividing by. Like this:
List of index numbers we can use for splitting the list
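You can see this repeating pattern with a quick throwaway loop, e.g. taking a running number mod 3:

```powershell
# each running number mod 3 cycles through 0,1,2 over and over
0..7 | %{$_ % 3}
# -> 0 1 2 0 1 2 0 1
```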


The second trick is that this generated repeating sequence of numbers is used as the indexes (or keys) of a hash table. Each key in the hash table has a value; in this case the values of the keys will be the split lists of hosts. The hash table here is $NewArrays, which holds the arrays of the smaller lists:
The list of smaller lists (arrays) in a hash table

Note the 'Name' column which is the list of keys.

To pull each list one by one, you can address each key with the .Item parameterized property.


Or like this:

0..5 | %{$NewArrays.item($_)}
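Or, without hard-coding the number of keys, you can enumerate the hash table itself; the unary comma keeps each sub-list together as one array on the pipeline:

```powershell
# walk every key/value pair of the hash table in key order
$NewArrays.GetEnumerator() | sort Name | %{ ,$_.Value }
```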


t









19 February, 2017

Random password generator - Scripting

These days everything in the shops is handcrafted. It's fashionable to buy handcrafted yogurt, honey, jam... beer... so why not have your very own homemade password generator?!

It's actually useful to put into scripts which create user accounts in e.g. Active Directory. You can have the given user phone the support team to reset the password when they want to start using the account - so no one knows the password until then.

You really want long (15+ characters) complex passwords, because you never know how long a newly created account will sit around waiting for the user to reset its password. There are many books you can read on why complex and random passwords are needed.

I'll show you the whole code upfront and then explain the bits, like in every Columbo episode: they show you the buildup and the murder, and then Columbo solves the puzzle in front of your eyes... Peter Falk was awesome in that character!

The random password generator itself:
[string]::join("",((48..57) + (65..90) + (97..122) | Get-Random -count 15 | %{[char]$_}))

The bits of the oneliner

Passwords need characters, the easiest way to generate some is from the ASCII table. In Powershell you can generate a list of numbers on the fly when you define an array, these can be the ASCII codes of characters:
  • ASCII codes for all lowercase letters: (97..122)
  • ASCII codes for all uppercase letters: (65..90)
  • ASCII codes for all numbers from 0 to 9: (48..57)
You can see there are gaps between the ASCII code ranges of the numbers, uppercase and lowercase letters. To make these 3 arrays of numbers look like one array, just add them together:
(48..57) + (65..90) + (97..122)


Array of ASCII numbers for random password
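A quick sanity check on the combined array (10 digits + 26 uppercase + 26 lowercase letters):

```powershell
# the three ranges add up to one 62-element array of ASCII codes
((48..57) + (65..90) + (97..122)).count
# -> 62
```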


Nice, we have a list of ASCII codes for all the characters we want to choose from, so let's pick 15 random ones:
Get-Random -count 15


Then pipe these through to a foreach loop and convert the numbers to characters:
%{[char]$_}

Taking 15 random numbers of the ASCII arrays and converting to characters


Good, we have a random list of characters in an array, but I need it as one long string so I can use it as a real password. For this purpose, we can use the join function of the [string] type with an empty delimiter:
[string]::join("",...

There you go, you now have your handcrafted random password generator.
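If you want to feed the generated password straight into an account-creation cmdlet, it usually has to be a SecureString first; a minimal sketch (the New-ADUser line assumes the ActiveDirectory module and is only illustrative):

```powershell
$plain = [string]::join("",((48..57) + (65..90) + (97..122) | Get-Random -count 15 | %{[char]$_}))
# most cmdlets expect a SecureString rather than plain text
$securePwd = ConvertTo-SecureString $plain -AsPlainText -Force
# e.g.: New-ADUser -Name "newuser" -AccountPassword $securePwd -Enabled $true
```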


28 January, 2017

Parse user name from events - OS

Have you ever needed to parse fields or text from the Windows Event Log? Even if the answer is no, read on. You never know when someone stops you on the road, holds a gun to your head and shouts until you parse all the fields from those 100 000 events.


My example was just about parsing user names from events in the Application log, in a very friendly situation - no guns or shouting involved...
There were 300 000+ of these events in the last 24 hours, and I had to extract from them the list of users connecting to the app, with a connection count for each user in the same time frame.


This is what the events I'm parsing look like:

Event with User name to parse and count


First attempt

I used Get-WinEvent and filtered in its hash based switch as much as possible to speed up the query for events. Something like:
$timelimit=(Get-Date).addhours(-24);
Get-WinEvent -ea SilentlyContinue -FilterHashtable @{LogName="Application"; StartTime=$timelimit; id=998}


The events will tell me the SID (Security IDentifier) of the user who generated the event, to look up the user name I used the SecurityIdentifier and then the NTAccount classes of System.Security.Principal:
Select @{e={(New-Object System.Security.Principal.SecurityIdentifier($_.userID)).Translate([System.Security.Principal.NTAccount])}; l="User";}


The trick to make this a one-liner was to generate the user name on the fly when creating a custom object with 'Select' in the command.


Full command and the measure of run time:
measure-command {$timelimit=(Get-Date).addhours(-24); $tmp=Get-WinEvent -ea SilentlyContinue -FilterHashtable @{LogName="Application"; StartTime=$timelimit; id=998} | Select @{e={(New-Object System.Security.Principal.SecurityIdentifier($_.userID)).Translate([System.Security.Principal.NTAccount])}; l="User";} | group user}


Days              : 0
Hours             : 0
Minutes           : 41
Seconds           : 0
Milliseconds      : 11

Second attempt - with regex parsing


Well, 41 minutes doesn't seem like a lot of time when you are spending it with your loved ones, but it's way above the patience limit of an average IT guy... Remember the first part of this article? I was lucky with this application: it logged the user name into the message of the event (along with some other random text).


So let's try something different:
measure-command {$timelimit=(Get-Date).addhours(-24); $tmp=Get-WinEvent -ea SilentlyContinue -FilterHashtable @{LogName="Application"; StartTime=$timelimit; id=998} | Select @{e={$matches=$null;$a=($_.message -match "UserName: (?<user>[^`n]+)"); $matches.user}; l="User";} | group user}


Days              : 0
Hours             : 0
Minutes           : 8
Seconds           : 13
Milliseconds      : 54


Very nice, an 80% reduction in run time. What's different in this run?
The reason this is quicker is that the command does not query Active Directory to look up each SID and translate it to a user name; instead, it just parses the user name from the message field of the event with a regex pattern:
$matches=$null;$a=($_.message -match "UserName: (?<user>[^`n]+)"); $matches.user
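You can test the pattern on a made-up sample message before running it against the real events:

```powershell
# a fake event message with the user name on its own line
$msg = "Connection accepted`nUserName: luke@tatooine.com`nSessionId: 42"
$msg -match "UserName: (?<user>[^`n]+)"   # -> True
$matches.user                             # -> luke@tatooine.com
```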



Hope this helps some of you save a bit of time, which you can then spend away from the computer, thinking about the next powershell trick!


t

14 May, 2016

Find target of DFS links - OS

There are many times when people need to know the target folder of a DFS link. However, sometimes you have multiple levels of DFS namespaces. For example, you have a Domain Based root called \\tatooine.com\root with DFS folders and links under it, but one link, e.g. \\tatooine.com\root\cities\moseisley, points to a standalone DFS root, \\c3podc1\moseisley, and there are folders and pointers under that namespace as well.

It takes many clicks in the DFS Management console to open each DFS namespace and root and walk through them. Alternatively, you can use the DFSN module on Windows Server 2012, but the command Get-DFSNFolderTarget doesn't understand the multi-level structure; it can only work in one given DFSN root.

Let's take an example: we want to know which file server and which folder the following path points to: \\tatooine.com\root\cities\moseisley\cantina\stock
In the DFS Management console, you have to:
  1. add \\tatooine.com\root namespace for display
  2. drill down to the MosEisley folder to find that it actually points to \\c3podc1\moseisley
  3. so then you add \\c3podc1\moseisley DFS root to the console
  4. drill down to the Stock folder under Cantina and find that it points to \\fileserver\share1\documents folder

multi-level dfs roots
Find a target in a multi-level DFS structure on DFS Management Console

It's not very rewarding after you get the 5th of these requests from users...

Here is a quick script which can walk through the folder structure: just specify the full path of a folder and it will tell you, for each level, where it points to. The script requires the DFSN PS module installed on the box - included in the DFS Management Tools feature.
If the given folder level is just a DFS folder without a target, it will obviously show an empty target, e.g.:

DFS lookup with powershell


On the above picture:

  • \\tatooine.com\root\cities is just a folder because it doesn't point anywhere
  • \\tatooine.com\root\cities\moseisley points to another DFS root
  • \\c3podc1\moseisley\Cantina is again just a folder without DFS target
  • \\c3podc1\moseisley\Cantina\stock is the pointer we wanted to look for and it points to \\fileserver\share1\documents


Here is the code:
 param([string]$folderpath = "")  
   
 $erroractionpreference = "silentlycontinue"  
 Import-module DFSN
 $arr = $folderpath.split("\\")  
 $basepath = ("\\" + $arr[2] + "\" + $arr[3])  
 $testpath = $basepath  
   
 # go from the 4th element of the path and try to get the target of each
 for($i = 4; $i -lt $arr.count; $i++){  
    $testpath = $testpath + "\" + $arr[$i]  
    $result = get-dfsnfoldertarget $testpath  
    $testpath + "->" + $result.targetpath  
   
    if($result.targetpath){  
       $testpath = $result.targetpath  
    }  
 }  
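Assuming you save the script above as get-dfstarget.ps1 (the file name is just an example), you would call it like this:

```powershell
# walk the full DFS path level by level and print where each level points to
.\get-dfstarget.ps1 -folderpath "\\tatooine.com\root\cities\moseisley\cantina\stock"
```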



17 January, 2016

Search host name based on IP from Forward Lookup zone - DNS

There are times when the need calls for very ugly but practical workarounds. This time, we want to know the DNS name of a host, we only know the IP address and we don't have reverse lookup zones.

In this case, we can list all the A records from the forward lookup zone and search for the IP address we know.


Use WMI

If you have admin rights on the DNS server, you can use WMI:
gwmi -Namespace root/microsoftDNS -q "select * from MicrosoftDNS_AType where recorddata='10.1.1.123'" | select Ownername,recordData


Use dnscmd

However, if you don't have any sort of permissions, you can try to use dnscmd to enumerate all records from the given zone, use powershell to search for the IP, and then do some text parsing to get a proper output:



A bit of explanation:
  • $zoneContent = dnscmd $dnsserver /enumrecords $dnsDomain . /continue
    Get the full list of records from the given zone
  • if($item -match "$ip"){...
    Go through each line in the output and if the given line contains the IP you are looking for, start processing the data
  • if($item -match "^ "){
    If the line starts with a space, that means it holds an IP which belongs to a host with multiple IPs, so we will need to list the previous line as well
  • $aging = $($tmp=$zoneContent[$k-1] -match "aging:(?<number>[^\]]+)"; $matches.number)
    $timestamp = (Get-Date ("1601/01/01 00:00")).addhours($aging)
    Calculate the time stamp of the record from the Aging number (which is the number of hours since 1st Jan 1601)
  • New-Object -TypeName psobject -Property @{"IP"=$ip; Host=($zoneContent[$k-1].split(" ")[0]); timestamp=$timestamp}
    Put the data into an object and throw it to stdout
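To make the aging-to-date conversion concrete, here it is with an arbitrary made-up aging value:

```powershell
# the aging number is the count of hours elapsed since 1st Jan 1601
$aging = 3650000   # made-up sample value
(Get-Date ("1601/01/01 00:00")).AddHours($aging)
```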
The sample script in full:

 $ip = "10.1.1.122"  
 $dnsServer = "c3podc1"  
 $dnsDomain = "tatooine.com"  
   
 $zoneContent = dnscmd $dnsserver /enumrecords $dnsDomain . /continue  
 $k = 0  
   
 Foreach($item in $zoneContent){  
    if($item -match "$ip"){  
       # if the host has 2 IPs and we searched for the 2nd one, we will need the previous line from the output  
       if($item -match "^ "){  
          $aging = $($tmp=$zoneContent[$k-1] -match "aging:(?<number>[^\]]+)"; $matches.number)  
          $timestamp = (Get-Date ("1601/01/01 00:00")).addhours($aging)  
          New-Object -TypeName psobject -Property @{"IP"=$ip; Host=($zoneContent[$k-1].split(" ")[0]); timestamp=$timestamp}  
            
          $aging = $($tmp=$item -match "aging:(?<number>[^\]]+)"; $matches.number)  
          $timestamp = (Get-Date ("1601/01/01 00:00")).addhours($aging)  
          New-Object -TypeName psobject -Property @{"IP"=$ip; Host=($zoneContent[$k-1].split(" ")[0]); timestamp=$timestamp}  
       }  
       else{  
          $aging = $($tmp=$item -match "aging:(?<number>[^\]]+)"; $matches.number)  
          $timestamp = (Get-Date ("1601/01/01 00:00")).addhours($aging)  
          New-Object -TypeName psobject -Property @{"IP"=$ip; Host=($item.split(" ")[0]); timestamp=$timestamp}  
       }  
    }  
    $k++  
 }  
   

23 November, 2015

Which DNS records would be scavenged - AD

In connection with a previous post on listing DNS scavenging settings, I thought I'd post the couple of lines of code which gave me confidence before turning on or modifying scavenging settings. Before I turn on any automation whose only job is to delete stuff from a production environment, I always have a second thought when I put my head on the pillow: "did I really set the right values, or will the phone ring in 2 hours, waking me up and telling me there's a DNS outage?"

To make it a bit more scientific than "close your eyes and click OK", here are a couple of lines of PS which can help you identify all records in a DNS zone which would be deleted based on your thresholds.
  • Set the parameters: DNS server name, the DNS zone, and the age threshold specifying how many days old a record has to be to get deleted. Scavenging has a 7 + 7 day "No-refresh" + "Refresh" interval, so records older than 14 days will potentially be deleted when the scavenging process runs:
    #set parameters
    $dnsServer = "c3podc1"
    $domain = "tatooine.com"
    $agetreshold = 14
  • Threshold in hours from Microsoft's beginning of time definition (1st Jan 1601):
    # calculate how many hours is the age which will be the threshold
    $minimumTimeStamp = [int] (New-TimeSpan -Start $(Get-Date ("01/01/1601 00:00")) -End $((Get-Date).AddDays(-$agetreshold))).TotalHours
  • Enumerate all records older than the time threshold
    # get all records from the zone whose age is more than our threshold
    $records = Get-WmiObject -ComputerName $dnsServer -Namespace "root\MicrosoftDNS" -Query "select * from MicrosoftDNS_AType where Containername='$domain' AND TimeStamp<$minimumTimeStamp AND TimeStamp<>0 "
  • List the records and the time stamps
    # list the name and the calculated last update time stamp
    $records | Select Ownername, @{n="timestamp";e={([datetime]"1.1.1601").AddHours($_.Timestamp)}}
The output should look like this:
DNS records with time stamps


The full script:
 #set parameters  
 $dnsServer = "c3podc1"  
 $domain = "tatooine.com"  
 $agetreshold = 14  
   
 # calculate how many hours is the age which will be the threshold  
 $minimumTimeStamp = [int] (New-TimeSpan -Start $(Get-Date ("01/01/1601 00:00")) -End $((Get-Date).AddDays(-$agetreshold))).TotalHours  
   
 # get all records from the zone whose age is more than our threshold   
 $records = Get-WmiObject -ComputerName $dnsServer -Namespace "root\MicrosoftDNS" -Query "select * from MicrosoftDNS_AType where Containername='$domain' AND TimeStamp<$minimumTimeStamp AND TimeStamp<>0 "  
   
 # list the name and the calculated last update time stamp  
 $records | Select Ownername, @{n="timestamp";e={([datetime]"1.1.1601").AddHours($_.Timestamp)}}  
   



t


08 November, 2015

List DNS scavenging settings on multiple servers remotely - AD

DDNS (Dynamic DNS, where clients register their own DNS records) was a very good idea when it was published in RFC2136 (it had been missed like a good slice of bread), but it inevitably left some questions on the table. For example, if I let 100 000 hosts register their own records, who will tell them to clean up their stuff when they don't need it anymore? On the other hand, if I don't use DDNS and have only one DHCP server registering addresses, I can just regulate that one guy and tell it off if it doesn't clean up its rubbish.

The answer is DNS scavenging which would be the butter on that slice of bread just to make it taste better and proper. But we want to make sure that the butter we put on the bread is not rotten. Otherwise we would need to throw the bread to the bin with the butter... ok, enough of this nonsense.

DNS scavenging essentially deletes stale / old records from the given DNS zone. What we want to make sure of is that the scavenging process is properly configured; otherwise we could end up losing very valuable DNS records and causing outages. Believe me, you don't want an outage caused by something as fundamental as DNS.

There are a couple of rules we need to keep in mind:
  • The scavenging intervals have to be thought through - I'd go with the default settings 7+7 days
  • There should be only 1 DNS server scavenging the zone regularly even if we have lots of e.g. Domain Controllers hosting the zone.
  • The zone should be restricted so that only that one server is allowed to scavenge it. You can read more about scavenging e.g. here and here 
Question: if I have 100 domain controllers hosting an AD integrated zone, how can I check that there's only one set to scavenge the zone? To get these settings from one server, you can use dnscmd /info. To do it on multiple servers, you can do some dnscmd output parsing in powershell, e.g.:

Create an object where you will store the name of the DNS host, the scavenging interval set on that server and the default aging state on that server:
$sObject = "" | select hostname,ScavengingInterval,LastScav,DefaultAgingState

Take the output of dnscmd /info and go through each line:
dnscmd $srv /info | %{

If it's the line where scavenging info is stored, do some regex matching to take out the bits you need:
if($_ -imatch "last scav"){
$value = ([regex]::Match($_, "= .+$")).value -replace "= ",""

Add it to your output object:
$sObject.LastScav = $value

It will show you an output like this:
Note the date and result of the last scavenging run

The full script:
 # get the list of Windows DNS servers from the pipe  
 $hostlist = @($Input)  
   
 $hostlistlength = ($hostlist | measure).count  
   
 # go through each host  
 foreach($srv in $hostlist){  
    $sObject = "" | select hostname,ScavengingInterval,LastScav,DefaultAgingState  
   
    # run dnscmd to get the detailed info of each DNS server  
    dnscmd $srv /info | %{  
       $value = $null  
   
       # pick out the data from dnscmd output with regex matches  
       if($_ -imatch "last scav"){  
          $value = ([regex]::Match($_, "= .+$")).value -replace "= ",""  
          $sObject.LastScav = $value  
          $value = $null  
       }  
       elseif($_ -imatch "ScavengingInterval"){  
          $value = ([regex]::Match($_, "= .+$")).value -replace "= ",""  
          $sObject.ScavengingInterval = $value  
          $value = $null  
       }  
       elseif($_ -imatch "DefaultAgingState"){  
          $value = ([regex]::Match($_, "= .+$")).value -replace "= ",""  
          $sObject.DefaultAgingState = $value  
          $value = $null  
       }  
    }  
    $sObject  
 }  
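The script reads the server list from the pipeline, so assuming you saved it as get-scavenging.ps1 (the file name is just an example) you can run it like this:

```powershell
# feed the list of DNS servers from a text file and format the output as a table
gc c:\dnsservers.txt | .\get-scavenging.ps1 | ft -auto
```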
   




23 June, 2015

Find PDC in a domain (not just current domain) - AD

Here's a question for a nice outage on a quiet spring evening: you've got issues with a PDC and you want to check out all the PDCs in all your Active Directory domains. How can you find the PDCs quickly in each domain without running netdom query fsmo against all domains one by one?

Side note: the Active Directory Domain Controller which holds the PDC (Primary Domain Controller) role in a domain is not uber-critical to have up and running every minute, but if it's up and not performing well... well, that's a different issue. If the PDC is down, it only means that password change sync is slow, so users may see weird behavior when changing a password and then using it against another domain controller within a short period of time. The other roles, like primary time source and master server for GPO and DFS replication conflicts... etc., are not massively important.
The biggest issue is when it responds sporadically or sometimes not at all; that makes the behavior of applications contacting the PDC unpredictable and hard to troubleshoot.

First step, let's list our PDCs in all domains. If you have several forests and domains, it's not as easy as it sounds at first. There are several ways, and if you have Windows 2012 R2 with the AD tools installed, it's fairly easy:

PS C:\> Import-Module ActiveDirectory
PS C:\> (Get-ADForest -identity tatooine.com).domains | %{(Get-ADDomain -server $_).PDCEmulator}

But if you want a script which gets the PDC of a specified domain without depending on the ActiveDirectory PowerShell module, here are two native ways to do it.


With a normal LDAP search:
PS C:\> $domainDN = "dc=tatooine,dc=com"
PS C:\> $searchRoot = new-object System.DirectoryServices.DirectoryEntry("LDAP://$domainDN")
PS C:\> $searcherObj = new-object System.DirectoryServices.DirectorySearcher
PS C:\> $searcherObj.SearchRoot = $searchRoot
PS C:\> $searcherObj.Filter = ("(objectClass=top)")
PS C:\> $tmpstr = $searcherObj.SearchRoot.Properties.Item("fsmoroleowner").Value
PS C:\> $pdc = $tmpstr.split(",")[1].split("=")[1]


Via .net Directory context:
PS C:\> $domainFQDN = "tatooine.com"
PS C:\> $context = new-object System.DirectoryServices.ActiveDirectory.DirectoryContext("Domain",$domainFQDN)
PS C:\> $domain = [System.DirectoryServices.ActiveDirectory.Domain]::GetDomain($context)
PS C:\> $domain.pdcRoleOwner


Advanced stuff, list all PDCs of all domains in 1 forest:
PS C:\> $ForestRootDomainFQDN = "tatooine.com"
PS C:\> $context = new-object System.DirectoryServices.ActiveDirectory.DirectoryContext("Forest",$ForestRootDomainFQDN)
PS C:\> $forest = [System.DirectoryServices.ActiveDirectory.Forest]::GetForest($context)
PS C:\> $forest.Domains | %{$_.pdcRoleOwner.name}

You've got the names of the PDCs, so you can go and fix them ;)

t

06 April, 2015

Verify forward and reverse DNS records - OS

DNS is a fairly simple and usually reliable service - when it works. People don't really think about it unless there are weird issues with an application: they spend 2 days troubleshooting it, and just to make sure they have looked at everything, they check DNS, and it turns out that those weird issues are in fact caused by e.g. missing reverse records.

These issues manifest as applications behaving strangely, often providing uselessly generic error messages or, even worse, misleading ones. After a very tedious issue over a weekend, which required repeated checkouts across a globally distributed AD integrated DNS zone, I decided I wouldn't do this manually every time with Excel magic, but via a script.

Steps needed to be made:
  • Take a list of server names (ideally FQDNs)
  • Perform a forward lookup to get the IP address(es) of each server
  • Take those IPs and perform a reverse lookup on them
  • Mark each host and IP with an error if either the forward or reverse lookup fails, or if the reverse lookup returns a different hostname than the server name provided

Forward lookup (filtering on IPv4 only):
[array]$IPAddresses = [System.Net.Dns]::GetHostAddresses($obj.ComputerName) | ?{$_.AddressFamily -eq "InterNetwork"} | %{$_.IPAddressToString}


Reverse lookup:
$tmpreverse = [System.Net.Dns]::GetHostByAddress($_).HostName

Output:






The full script (simplified version):

 $hostlist = @($input)  
   
 # running through the list of hosts  
 $hostlist | %{  
      $obj = "" | Select ComputerName,Ping,IPNumber,ForwardLookup,ReverseLookup,Result  
      $obj.ComputerName = $_  
   
      # ping each host  
      if(Test-Connection $_ -quiet){  
           $obj.Ping = "OK"  
     $obj.Result = "OK"  
      }  
      else{  
           $obj.Ping = "Error"  
     $obj.Result = "Error"  
      }  
        
      # lookup IP addresses of the given host  
      [array]$IPAddresses = [System.Net.Dns]::GetHostAddresses($obj.ComputerName) | ?{$_.AddressFamily -eq "InterNetwork"} | %{$_.IPAddressToString}  
   
      # capture count of IPs  
      $obj.IPNumber = ($IPAddresses | measure).count  
        
      # if there were IPs returned from DNS, go through each IP  
   if($IPAddresses){  
     $obj.ForwardLookup = "OK"  
   
        $IPAddresses | %{  
             $tmpreverse = $null  
                  
                # perform reverse lookup on the given IP  
             $tmpreverse = [System.Net.Dns]::GetHostByAddress($_).HostName  
             if($tmpreverse){  
                  
                     # if the returned host name is the same as the name being processed from the input, the result is OK  
                  if($tmpreverse -ieq $obj.ComputerName){  
                       $obj.ReverseLookup += "$_ : OK `n"  
                  }  
                  else{  
                       $obj.ReverseLookup += "$_ different hostname: $tmpreverse `n"  
                       $obj.Result = "Error"  
                  }  
             }  
             else{  
                  $obj.ReverseLookup = "No host found"  
                  $obj.Result = "Error"  
             }  
     }  
      }  
      else{  
           $obj.ForwardLookup = "No IP found"  
           $obj.Result = "Error"  
      }  
        
      # return the output object  
      $obj  
 }  


t


13 January, 2015

Move page file remotely - OS

With many servers in the environment, there are inevitably a bunch of them which are old: they have small disk capacity, but they are needed "just for another couple of months" until the new system is up... and a couple of years later you are still the chosen one to keep them alive.

There are many challenges with these old boxes, so let me pick one: there's not enough disk space on drive C: and there's not much left to delete anymore. Last resort: move the pagefile to another drive (if there's one in the box). Let's script it so we can make this change on many servers remotely.

Need to check a couple of things:
  • the original page file's details
  • target drive exists
  • it has sufficient space to accommodate the page file (based on MaximumSize)
  • target drive is local and is not hosted on SAN

Create the output object and get data of the current page file:
$obj = "" | Select ComputerName,OldPageFile,OldInitSize,OldMaxSize,Result
$obj.ComputerName = $srv
$pagefiledata = gwmi -ComputerName $srv -Class Win32_PageFileSetting

Get details of the volume where we want to move the page file (check whether it exists and whether there's enough space on it - here the limit is twice the size of the current page file):
$targetvolume = gwmi -ComputerName $srv -Class Win32_LogicalDisk -Filter "name='$movetodrive'"

Create new page file:
$result = Set-WmiInstance -ComputerName $srv -Class Win32_PageFileSetting -Arguments @{name="$targetfilename";InitialSize=$obj.OldInitSize;MaximumSize=$obj.OldMaxSize;}

Sometimes - especially on Windows Server 2003 - Set-WmiInstance can't create the page file if InitialSize and MaximumSize are set in the same command; the workaround is to create the page file as system managed (without additional parameters) and set the size afterwards.
$result = Set-WmiInstance -ComputerName $srv -Class Win32_PageFileSetting -Arguments @{name="$targetfilename"}

if($result.name -eq $targetfilename){
  $newPGFile = gwmi -ComputerName $srv -Query "Select * from Win32_PageFileSetting where name = '$($targetfilename.replace("\","\\"))'"
  $newPGFile.InitialSize = $obj.OldInitSize
  $newPGFile.MaximumSize = $obj.OldMaxSize
  $FinalResult = $newPGFile.Put()
}


If we do this, the script should check all the parameters we set and the existence of the new page file as part of error handling:
$checkoutPGFile = gwmi -ComputerName $srv -Query "Select * from Win32_PageFileSetting where name = '$($targetfilename.replace("\","\\"))'"
if(!($checkoutPGFile -and ($checkoutPGFile.Maximumsize -eq $obj.OldMaxSize) -and ($checkoutPGFile.Initialsize -eq $obj.OldInitSize))){...

If all is good with the new page file, we can get rid of the old one. Of course this will require a reboot, but I decided to leave that to the scheduled reboots of the servers.
$oldPageFileObject = gwmi -ComputerName $srv -Query "Select * from Win32_PageFileSetting where name = '$searchString'"
$oldPageFileObject.Delete()

The full script, which can take a list of hosts from the pipe and does all the checks, looks like this (obviously I stripped out all the logging and fancy stuff so the essence of it is easier to see):
   
 $hostlist = @($input)  
   
 $objColl = @()  
 $movetodrive = "D:"  
 $targetfilename = $movetodrive + "\pagefile.sys"  
   
 foreach($srv in $hostlist){  
    $obj = "" | Select ComputerName,OldPageFile,OldInitSize,OldMaxSize,Result  
    $obj.ComputerName = $srv  
   
    $pagefiledata = gwmi -ComputerName $srv -Class Win32_PageFileSetting  
      
    if($pagefiledata){  
       $obj.OldPageFile = $pagefiledata.name  
       $obj.OldInitSize = $pagefiledata.Initialsize  
       $obj.OldMaxSize = $pagefiledata.Maximumsize  
         
       $targetvolume = gwmi -ComputerName $srv -Class Win32_LogicalDisk -Filter "name='$movetodrive'"  
         
       if(!$targetvolume){  
          $obj.Result = "$movetodrive does not exist"  
          $objColl += $obj  
          Continue  
       }  
         
       if(($targetvolume.Freespace / 1MB) -lt ($obj.OldMaxSize * 2)){  
          $obj.Result = "Not enough space on $movetodrive"  
          $objColl += $obj  
          Continue  
       }  
         
       $result = Set-WmiInstance -ComputerName $srv -Class Win32_PageFileSetting -Arguments @{name="$targetfilename";InitialSize=$obj.OldInitSize;MaximumSize=$obj.OldMaxSize;}  
         
       # this is needed in case the WMI instance can't be created with all parameters in one go  
       if(!($result.name -eq $targetfilename)){  
          $result = Set-WmiInstance -ComputerName $srv -Class Win32_PageFileSetting -Arguments @{name="$targetfilename"}  
         
          if($result.name -eq $targetfilename){  
             $newPGFile = gwmi -ComputerName $srv -Query "Select * from Win32_PageFileSetting where name = '$($targetfilename.replace("\","\\"))'"  
             $newPGFile.InitialSize = $obj.OldInitSize  
             $newPGFile.MaximumSize = $obj.OldMaxSize  
             $FinalResult = $newPGFile.Put()  
          }  
            
          $checkoutPGFile = gwmi -ComputerName $srv -Query "Select * from Win32_PageFileSetting where name = '$($targetfilename.replace("\","\\"))'"  
          if(!($checkoutPGFile -and ($checkoutPGFile.Maximumsize -eq $obj.OldMaxSize) -and ($checkoutPGFile.Initialsize -eq $obj.OldInitSize))){  
             $obj.Result = "Could not create new page file"  
             $objColl += $obj  
             Continue     
          }  
       }  
         
       if($result.name -eq $targetfilename){  
          $searchString = $obj.OldPageFile.replace("\","\\")  
          $oldPageFileObject = gwmi -ComputerName $srv -Query "Select * from Win32_PageFileSetting where name = '$searchString'"  
          $oldPageFileObject.Delete()  
       }  
         
       if((gwmi -ComputerName $srv -Class Win32_PageFileSetting).name -eq $targetfilename){  
          $obj.Result = "SUCCESS - please reboot the host"  
       }  
       else{  
          $obj.Result = "Could not move page file"  
       }  
    }  
      
    $objColl += $obj  
 }  
   
 $objColl  
   



t