Monday 19 December 2011

How To Get Full IPv6 Net Sessions Addresses

Net session is a command I use frequently when working on client systems to find the IP addresses of machines connected to servers.  However, with IPv6 addresses the net session command truncates the address, rendering the output useless.

Below is a PowerShell command I found to get the full IPv6 address:


gwmi win32_serversession | ft -Property ComputerName,UserName
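
The ComputerName property is what carries the client's address here. On a busy server, the same WMI class can give a little more context; a variation I find easier to read (ActiveTime is a standard Win32_ServerSession property, nothing extra is needed):

gwmi win32_serversession | sort UserName | ft -AutoSize -Property ComputerName,UserName,ActiveTime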

Thursday 15 December 2011

Automated Notifications For Machines Not Checking In To WSUS

Recently had a problem with an anti-virus update preventing computers from checking in with the WSUS server. Not too big a deal except for the fact that we didn't actually notice until our client pointed out that all the machines were showing errors connecting to the server.

Looking into this, the email notifications in WSUS (which we had set up and which were working) do not list machines that have not checked in for a long time, so it was not at all obvious that there was a problem. Furthermore, there is no way within the WSUS administration console to set up email notifications for this.

After a bit of research, I managed to put together the following PowerShell script, which others may find useful (be sure to change the variables at the top to appropriate values for your environment):



# find stale computers in WSUS
# based on code:
# http://www.sapien.com/forums/scriptinganswers/forum_posts.asp?TID=4306
# http://msdn.microsoft.com/en-us/library/ee958382(v=VS.85).aspx


$smtpserver = "myExchangeServer"
$sender = "WSUS@myDomain.example"
$recipient = "Support@myDomain.example"
$maxAge = 14    # days since last sync before a machine is considered stale

$smtp = new-object Net.Mail.SmtpClient($smtpserver)
$msg = new-object Net.Mail.MailMessage
$msg.From = $sender
$msg.To.Add($recipient)
$msg.Subject = "WSUS Machines Not Checking-In Report"
$msg.Body = "<p>WSUS Machines that have not checked-in in the last $maxAge days</p>"
$msg.Body += "<table>"

$lastValidContactDate = $(Get-Date).Adddays(-$maxAge)
[reflection.assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration") | out-null
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()
$computerScope = new-object Microsoft.UpdateServices.Administration.ComputerTargetScope

$computerScope.ToLastSyncTime = $lastValidContactDate
$wsus.GetComputerTargets($computerScope) | foreach {
  $msg.Body += "<tr><td>" + $_.FullDomainName + "</td><td>" + $_.LastSyncTime + "</td></tr>"
}
$msg.Body += "</table>"
$msg.IsBodyHTML = $true
$smtp.Send($msg)
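
To get the report automatically, schedule the script with Task Scheduler on the WSUS server. A minimal example, assuming the script has been saved as C:\Scripts\Get-StaleWsusComputers.ps1 (the path and schedule are just placeholders):

schtasks /create /tn "WSUS Stale Computers Report" /tr "powershell.exe -ExecutionPolicy Bypass -File C:\Scripts\Get-StaleWsusComputers.ps1" /sc daily /st 07:00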

Thursday 27 October 2011

How To Automatically Connect To Network Shares On Login To A Mac - Part 2


Since writing my previous post on this subject I have found a much better way of doing this, and as a bonus it works as a far more general login script mechanism, not just for mapping drives.

Instead of using launchd, this approach uses an OSX feature to automatically run applications at login and logout. I found the basic technique at MacEnterprise.org, so please check out the original article describing how to set up the base system, then see below for how to set up the drive mappings:

http://www.macenterprise.org/articles/runningitemsatlogin

An entry is added to the AutoLaunchedApplicationDictionary of the global loginwindow preference file. This calls an application launcher during logon for all users which then executes all scripts in a given directory.
On logout, the loginwindow LogoutHook is used to call a similar launcher to remove any drive mappings or other clean up that should be performed during logoff.

The launcher applications are just scripts that are bundled into two OS X “apps”, one for running the login scripts and one to run the logout ones. These applications and scripts are located in the following directories

/Library/Vuzzlevuzz/LoginLauncher.app
/Library/Vuzzlevuzz/LogoutLauncher.app
/Library/Vuzzlevuzz/LoginItems
- place scripts to be run at login here
/Library/Vuzzlevuzz/LogoutItems
- place scripts to be run at logout here
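
For reference, the launcher scripts themselves do nothing clever; the login side boils down to a loop like the sketch below (the real launchers come from the MacEnterprise article, this is only to show what is going on):

#!/bin/bash
# run every script in the LoginItems directory as the user logging in
for script in /Library/Vuzzlevuzz/LoginItems/*; do
  "$script"
done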

The two basic scripts that should be added to the relevant folders for connecting to network shares are:

/Library/Vuzzlevuzz/LoginItems/mapVolumes.sh
#!/bin/bash
/usr/bin/osascript <<-EOT
try
mount volume "smb://servername/share"
end try
EOT


/Library/Vuzzlevuzz/LogoutItems/unmapVolumes.sh
#!/bin/bash
/usr/bin/osascript <<-EOT
tell application "Finder" to eject "sharename"
EOT

DO NOT EDIT THESE SCRIPTS ON WINDOWS MACHINES!
Windows and Macs use different line endings; if these files are edited on a Windows machine the scripts will not run. All editing or creation of new scripts must be done on a Mac.


These instructions assume that the relevant files (the Vuzzlevuzz directory and below) have been copied to the desktop of the machine you are installing on and that you are logged in as a user with administrative privileges.

Open a terminal (Finder... Utilities... Terminal) and enter the following commands (Note: each number represents one line to enter at the terminal, ignore any other line breaks)

1. sudo mkdir /Library/Vuzzlevuzz
2. sudo cp -r ~/Desktop/Vuzzlevuzz/* /Library/Vuzzlevuzz
3. sudo chmod a+x /Library/Vuzzlevuzz/LoginLauncher.app
4. sudo chmod a+x /Library/Vuzzlevuzz/LogoutLauncher.app
5. sudo chmod a+x /Library/Vuzzlevuzz/LoginItems/*
6. sudo chmod a+x /Library/Vuzzlevuzz/LogoutItems/*
7. sudo defaults write /Library/Preferences/loginwindow '{ AutoLaunchedApplicationDictionary = ({Hide = 1; Path = "/Library/Vuzzlevuzz/LoginLauncher.app"; });}'
8. sudo defaults write com.apple.loginwindow LogoutHook /Library/Vuzzlevuzz/LogoutLauncher.app


References:
http://www.macenterprise.org/articles/runningitemsatlogin

Wednesday 26 October 2011

Active Directory Login Problems with OSX 10.7.2 Lion and .local domains

Recently one of our clients had two Macs decide to drop off the domain - all of a sudden they could no longer log in to them with active directory accounts. Both machines are running OSX Lion 10.7.2 and the problem seemed to start after installing the latest updates from Apple.

Poking about on the net, this seems to be a known problem with no real solution but I managed to piece together a process to get this working.

The problem is related to the way that OSX uses the .local domain for Bonjour and uses multicast DNS queries to try to locate network resources instead of directly querying the DNS server. If you have your Active Directory domain set up as mydomain.local (which is Microsoft best practice and the default in Small Business Server) then conflicts arise. This has been an ongoing issue with using Macs in Windows domains for some time and the reliability of domain logins seems to go up and down with each new release.

My first thought was to just disable mDNS but it seems that this is not possible as OSX uses this for normal DNS queries as well as for Bonjour.

Googling reveals lots of potential solutions for different versions of OSX but none of these worked for me. Eventually by piecing different bits of information together I stumbled on a working configuration which I repeated successfully on two machines.

I'm not sure if all of this is strictly necessary. I was working on the machines in question remotely and the client needed them up and running as soon as possible, so I was not able to isolate exactly which steps are required, but there is nothing here that will cause problems for small environments. For larger installations with multiple domain controllers this isn't really the best way of setting up machines, but it will at least get them logging on to the network until Apple comes up with a proper fix.

Before you start, if you have tried increasing the mDNS timeout settings as recommended by lots of articles on the net, then change them back. Once the machine has been set up as below it works correctly with the default, and I suspect that increasing the timeout would actually be counterproductive: we want the mDNS query to time out quickly so that the machine fails over to normal DNS queries.

Open a terminal and type

sudo nano /System/Library/SystemConfiguration/IPMonitor.bundle/Contents/Info.plist

Ensure mdns_timeout is set to 2:

<key>mdns_timeout</key>
<integer>2</integer>

Next, ensure name resolution is set up correctly by adding mydomain.local and mydomain to the DNS Search Domains

System Preferences, Network, Ethernet Connection, Advanced button, DNS, Search Domains
Add

mydomain.local
mydomain

Ensure the naked domain resolves by adding a record for it in the machine's hosts file
Open a terminal and type

sudo nano /etc/hosts

And add the following line, substituting the correct values for your environment:

ipaddress_of_domain_controller mydomain
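
You can check the entry is being picked up with dscacheutil (built in on 10.5 and later):

dscacheutil -q host -a name mydomain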

If the machine is already bound to the domain, then un-bind it before re-joining the domain as below:

System Preferences, Users & Groups, click Login options,
if already bound to a domain, click Edit and then Unbind. Else click Join

Do not enter the details in the first window that pops up but instead click on Open Directory Utility

Go to Services, Active Directory, and click the pencil icon to edit
Enter the domain name,

mydomain.local

Click Show Advanced Options and click on the Administrative tab
Tick Use Preferred Server and enter the IP address (not the name) of the domain controller
Untick Allow Authentication from any domain in the forest
Click Bind, enter credentials when prompted and wait for 10 minutes...

Once this finishes, stay in the Directory Utility and go to the Search Policy tab
Remove /Active Directory/MYDOMAIN/All Domains
Add /Active Directory/MYDOMAIN/mydomain.local
Move this entry up to immediately under /Local/Default
Your Search Policy list should now look like this:

/Local/Default
/Active Directory/MYDOMAIN/mydomain.local
/Active Directory/MYDOMAIN

Reboot, and you should now be able to log in with Active Directory accounts

Friday 30 September 2011

Sending Email through Exchange 2010 EWS with VBScript

At work we need to monitor the backups for all of our clients.  To make life easier a colleague of mine put some scripts together to read all the backup emails from the various servers and put them into a report, to easily show which have failed.

This worked great until a couple of weeks ago when we migrated from Exchange 2003 to 2010 and it all stopped working.

Of course, the guy who put this all together has since left the company so it was down to me to work out what was going on and put it right.

The mailbox monitoring script was written in well-commented VBScript so it was fairly easy to follow.  I turned on some 'debugging' (wscript.echo !) and found that the script was crashing when trying to parse long email subject lines (it breaks the subject line into fields to find the client name, job name, success or failure, etc.) because there were not as many fields as it was expecting.

Now there is no native way to read a mailbox in VBScript so this script uses a 3rd party ActiveX control called FreePop to connect to the mailbox and download the messages.

A little experimentation found that the Exchange server was line-wrapping the subject at 72 characters which effectively truncated it as far as FreePop was concerned, causing the problem.  At first I looked for some way to change this line-wrapping, but it seems to be hard-coded and immutable, so I then looked about for an alternative way to access a mailbox from VBScript.

Of course, I could have used a different language (Powershell sprung to mind) but then I would have had to re-write the whole thing.  Far preferable would be to simply replace the section of code that pulls the emails from the mailbox.

After a bit of Googling I discovered that Exchange 2010 exposes a web service (imaginatively called Exchange Web Services, or EWS) that can be used to do this and much more.  The only problem is that all the examples I could find were written in languages other than VBScript.

Still, not one to be deterred, I fiddled around until I got some test code successfully reading a mailbox inbox.
This code manually creates a SOAP xml message, submits it to the web service and parses the response - don't worry if you have no idea of what these things are, copy and paste works a treat :)

Watch out for those pesky line-breaks...

' set some variables
servername = "myExchangeServer"
username = "DOMAIN\User"
password = "UserPassword"

' enumerate items in inbox
smSoapMessage = "<?xml version='1.0' encoding='utf-8'?>" _
& "<soap:Envelope xmlns:soap='http://schemas.xmlsoap.org/soap/envelope/' xmlns:t='http://schemas.microsoft.com/exchange/services/2006/types'>" _
& "  <soap:Body>"_
& "    <FindItem xmlns='http://schemas.microsoft.com/exchange/services/2006/messages' xmlns:t='http://schemas.microsoft.com/exchange/services/2006/types' Traversal='Shallow'>" _
& "      <ItemShape>" _
& "        <t:BaseShape>AllProperties</t:BaseShape>" _
& "      </ItemShape>" _
& "      <ParentFolderIds>" _
& "        <t:DistinguishedFolderId Id='inbox'/>" _
& "      </ParentFolderIds>" _
& "    </FindItem>" _
& "  </soap:Body>" _
& "</soap:Envelope>"

set req = createobject("microsoft.xmlhttp")
req.Open "post", "https://" & servername & "/ews/Exchange.asmx", False,username,password 
req.setRequestHeader "Content-Type", "text/xml"
req.setRequestHeader "translate", "F"
req.send smSoapMessage

if req.status = 200 then
  set oXMLDoc = req.responseXML
  set ResponseCode = oXMLDoc.getElementsByTagName("m:ResponseCode")
  if ResponseCode(0).childNodes(0).text = "NoError" then
    Set oXMLSubject = oXMLDoc.getElementsByTagName("t:Subject")
      For i = 0 To (oXMLSubject.length -1)
        set oSubject = oXMLSubject.nextNode
        WScript.echo i & vbTab & oSubject.text
      next
  else
    WScript.echo "Exchange server returned error: " & ResponseCode(0).childNodes(0).text
  end if
else
  WScript.echo "Failed to connect to Exchange server, error code: " & req.status
end if



When I first ran this I was getting a 200 for the request object, which is a success, but the Exchange server was returning "ErrorServerBusy".  This threw me for a while, but I then discovered another new feature in Exchange 2010 called throttling policies.

Throttling policies are designed to prevent malicious or unintentionally large requests from affecting the performance of the server, and they are enabled by default in Exchange 2010.

MSDN blog on Understanding Throttling Policies


Using the Exchange Management Shell to run Get-ThrottlingPolicy shows you all the default settings.  In my case the problem was caused by the default EWSFindCountLimit of 1000 - the inbox I was querying had over 2000 items in it.

I could have changed the EWSFindCountLimit setting in the default throttling policy, or created a custom one just for this user.  But the only reason there were so many emails in the inbox was that the script had not been running for a while (it normally clears them out when it finishes), so I just deleted all the old emails to get below a thousand.
Everything then worked beautifully.
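
For reference, if you do need to raise the limit instead, a custom policy assigned to just the one mailbox is safer than changing the default. A sketch from the Exchange 2010 Management Shell (the policy and mailbox names here are made up):

New-ThrottlingPolicy -Name BackupReaderPolicy -EWSFindCountLimit 5000
Set-Mailbox -Identity backupreader -ThrottlingPolicy BackupReaderPolicy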

Deleting the emails from the inbox after they have been processed is achieved by calling the EmptyFolder method with another SOAP request.  Here is the xml for the request, I'll leave the actual VBScript to do this as an exercise for the reader :)

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:m="http://schemas.microsoft.com/exchange/services/2006/messages" xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <t:RequestServerVersion Version="Exchange2010_SP1" />
  </soap:Header>
  <soap:Body>
    <m:EmptyFolder DeleteType="HardDelete" DeleteSubFolders="false">
      <m:FolderIds>
        <t:DistinguishedFolderId Id='inbox'/>
      </m:FolderIds>
    </m:EmptyFolder>
  </soap:Body>
</soap:Envelope>
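
For the impatient: the VBScript plumbing is identical to the FindItem example above, so a minimal (untested) sketch is just:

' emptyFolderSoapMessage should contain the EmptyFolder XML above, built as a quoted string
set req = createobject("microsoft.xmlhttp")
req.Open "post", "https://" & servername & "/ews/Exchange.asmx", False, username, password
req.setRequestHeader "Content-Type", "text/xml"
req.send emptyFolderSoapMessage
if req.status <> 200 then WScript.echo "EmptyFolder failed, HTTP status: " & req.status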

Saturday 10 September 2011

Automated Disk Usage Reports With Powershell

Just a quick one today, a simple script to produce a nicely formatted email showing disk space usage.

When I set out to do this, I assumed it would be easy and expected to find hundreds of examples, but couldn't find any that did what I wanted.

This script uses Powershell and du.exe from Sysinternals (now part of Microsoft).


# uses du.exe from sysinternals
# http://technet.microsoft.com/en-us/sysinternals/bb896651.aspx

$dir="C:\users"
$smtpserver = "mail.example.local"
$sender = "sender@example.local"
$server = "file.example.local"
$recipient = "recipient@example.local"

$du=C:\utilities\du.exe -accepteula -l 1 $dir | sort -desc

$smtp = new-object Net.Mail.SmtpClient($smtpserver)
$msg = new-object Net.Mail.MailMessage

$msg.From = $sender
$msg.To.Add($recipient)
$msg.Subject = "$server Disk Usage Report"
$msg.Body = "<p>$server Disk Usage Report in KB</p>"

$msg.Body += "<table>"
# skip du.exe's banner lines, then emit one table row per directory
$du[4 .. $du.length] | forEach-Object {
  $fields = $_.split(" ",[StringSplitOptions]::RemoveEmptyEntries)
  $msg.Body += "<tr><td>$($fields[0])</td><td>$($fields[1 .. $fields.length])</td></tr>"
}
$msg.Body += "</table>"

$msg.IsBodyHTML = $true
$smtp.Send($msg)

Friday 9 September 2011

Fully Automated Backups of QNAP-TS410

One of my clients has a couple of sites with a QNAP NAS device at each that they use as file servers.  These QNAPs have a built in "backup" facility that you can use to automate copying the data from one device to another.

Why did I put backup in quotes?  Because this is not really backup; it is replication, which is not the same thing at all.

To illustrate the difference, imagine you have a spreadsheet, say, with important financial data in it.  One day, one of your staff deletes a large chunk of the contents of this spreadsheet and saves the file.  A couple of days later you realise this and go to your backup to get an older version of the file from before the data was removed...

If all you have is a replica of the current files then you are properly stuffed.

Now, luckily, this didn't actually happen to my client, but when I realised what the situation was (I had inherited the environment...) I knew I had to do something about it.

After much research I found there was no simple way to do this, but thanks to one article I found, and a lot of hard work on my part, I managed to piece together a full solution for automated daily backups (with reporting) using rsnapshot which I thought I would share in the hopes that it might help someone else out.

Many thanks go to http://www.sysnet.co.il/ where I found most of the groundwork for this.  This article builds upon that one, detailing how to fix a few problems I encountered with these specific QNAPs, and extends it to produce automated reporting on the success of the backups.

http://www.sysnet.co.il/files/Daily%20incremental%20backups%20with%20rsnapshot.pdf

Installation of Packages

Log on to the web interface of the QNAP

  • Go to Applications… QPKG Plugins
  • Click GET QPKG
    • If this fails – “Sorry, QPKG information is not available”
      • Check DNS servers have been configured correctly under System Administration… Network
      • If still not working, try downloading the package from somewhere else and copying to the NAS as this is very internet speed dependent and prone to timing out.
  • Select the Optware IPKG and download the appropriate package
    • For the QNAP-TS410 devices this is the ARM (x10/x12/x19 series) for TS-419P
    • IPKG just allows you to install a much wider range of packages onto the QNAP
  • Extract the zip file to get  a .qpkg file then click on INSTALLATION
  • Browse to the qpkg file and click INSTALL
    • If this fails then use scp to copy the qpkg file to the NAS and then install from the command line using 
    • sh Optware0.99.163_arm-x19.qpkg
      and this will show you more information about why it has failed
    • If you get an error saying:
      “Optware 0.99.163 installation failed. /share/MD0_DATA/optware existed. Please remove it first.”
      Then type the following command and try the install again:
      rm -rf /share/MD0_DATA/.qpkg/Optware
  • Once installation is successful, go to QPKG INSTALLED
  • Click on Optware and click ENABLE
  • Use putty (or whatever) to connect via SSH to the QNAP
  • Run the following commands to install the necessary packages
    • /opt/bin/ipkg update            (there will be some errors here but ignore them)
      /opt/bin/ipkg install rsnapshot
      /opt/bin/ipkg install nano

Rsnapshot Configuration
Configuration of rsnapshot is managed by editing the file /opt/etc/rsnapshot.conf
Initially this needs to be set up with the location on the file system to store the snapshots and a list of folders to be backed up.
Note that for remote backups the backup server pulls the data from the server being backed up; it is NOT pushed to the backup server as you might expect.
To set the snapshot folder location, edit /opt/etc/rsnapshot.conf

nano /opt/etc/rsnapshot.conf

and change:

snapshot_root  /opt/var/rsnapshot
to
snapshot_root  /share/MD0_DATA/rsnapshot

Exclude any rsnapshot backups and other data you do not want backed up by adding the following:

exclude        /share/MD0_DATA/rsnapshot
exclude        /share/MD0_DATA/Backup
exclude        /share/MD0_DATA/Qdownload
exclude        /share/MD0_DATA/Qmultimedia
exclude        /share/MD0_DATA/Qrecordings
exclude        /share/MD0_DATA/Qusb
exclude        /share/MD0_DATA/Qweb

Change the intervals to keep however many backups you want (rsnapshot is very efficient at not duplicating data so I configure it for 3650 daily backups, or ten years’ worth) by setting the BACKUP INTERVALS section as follows:

#interval      hourly     6
interval       daily      3650
#interval      weekly     4
#interval      monthly    12

Comment out the default backup sets by putting a hash in front of these lines in the BACKUP POINTS / SCRIPTS section

#backup        /etc/      localhost/
#backup        /opt/etc/  localhost/

Add the following line to carry out the actual backup:

backup admin@<site1_ip_address>:/share/MD0_DATA/    <site1>

NOTE:  The fields in this file must be separated by tabs, not spaces

After making any changes to the rsnapshot.conf, run the following command to make sure that everything is OK

rsnapshot configtest
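
If the file parses cleanly it should simply report Syntax OK; anything else means a mistake in the file (usually spaces where tabs should be).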

Setting Up SSH Keys
In order for the scheduled backups to be able to access the remote NAS device we need to set up SSH keys, so that the backup server can authenticate to the remote server with a key instead of a password.
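
If the backup server does not have an SSH key pair yet, generate one first (accept the default file location and leave the passphrase empty so the scheduled job can use the key unattended):

ssh-keygen -t rsa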
First copy the key from the backup server to the source server:

ssh admin@<site1_ip_address> "echo `cat ~/.ssh/id_rsa.pub` >> ~/.ssh/authorized_keys"

NOTE: the above command is all one line

Accept the authenticity of the host if prompted and then enter the admin password when prompted
Next set the permissions on the configuration file and add this to the autorun to ensure it is preserved following a reboot.
Login to the source NAS (Site 1) and enter the following commands:

nano /tmp/config/autorun.sh

Add these three lines to the file and save it:

mount -t ext2 /dev/mtdblock5 /tmp/config
chown admin.administrators /mnt/HDA_ROOT/.config
umount /tmp/config

Scheduling Rsnapshot
Cron is used to schedule the actual snapshots.  To configure this, edit /etc/config/crontab so that it has the following rsnapshot entry:

0 23 * * * /opt/bin/rsnapshot daily
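
The five fields are minute, hour, day of month, month, and day of week, so this entry starts the daily snapshot at 23:00 every night.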

After making any changes to this file, make sure to run the following command to ensure the changes are preserved if the NAS is rebooted

crontab /etc/config/crontab

 

Email Configuration
To set up the NAS for email, use the QNAP web administration page, go to System Administration… Notification… and enter the settings for your mail server.

You can send a test message from the ALERT NOTIFICATION tab to check this is working.

Setting Up Monitoring
The rsnapshot log file is stored here: /opt/var/log/rsnapshot

For automated alerting, create the following script, and then schedule it through crontab (make sure this script is stored under /share/MD0_DATA/… as files stored in /root, for example, get removed on system reboot).

/share/MD0_DATA/rsnapshot/check_backup.sh

#!/bin/bash
recipient=recipient@example.local
sender=sender@example.local
subject="Site 1 Backup Report"
log=/opt/var/log/rsnapshot
tmpfile=/opt/var/log/rsnapshot.tmp

function log {
  /bin/echo "[`date +%d/%b/%Y:%H:%M:%S`] $*\n" >> $log
}

if grep -q "rm -f /opt/var/run/rsnapshot.pid" $log
then
  if grep -q ERROR: $log
    then
      status=FAILED
    else
      status=SUCCESSFUL
  fi
else
  status="FAILED, INCOMPLETE"
# The following lines safely kill the backup to stop it
# impacting performance during the day
# comment them out if you do not want this to happen
  log "Backup window exceeded, terminating processes"
 
snapPID=`ps -ef | grep "/opt/bin/perl -w /opt/bin/rsnapshot daily" | awk '{print $1}'`

  rsyncPID=`ps -ef | grep "/opt/bin/rsync -a --delete --numeric-ids" | awk '{print $1}'`
  log "rsnapshot pid = $snapPID"
  log "rsync pid = $rsyncPID"
  log “Killing rsnapshot process”
  kill $snapPID
  log "Sleeping for 5 minutes to wait for rsnapshot to exit"
  sleep 300
  log “Killing rsync processes”
  kill $rsyncPID
fi

echo To: $recipient > $tmpfile
echo From: $sender >> $tmpfile
echo Subject: $subject - BACKUP $status >> $tmpfile
echo "Disk space used by backups: $(rsnapshot du | grep total)" >> $tmpfile
echo "Most recent backup sizes: " >> $tmpfile
echo "$(du -sh /share/MD0_DATA/rsnapshot/daily.0/*)" >> $tmpfile
echo "Backup Log:" >> $tmpfile

cat $tmpfile $log | /mnt/ext/usr/sbin/ssmtp -v $recipient

rm $log
rm $tmpfile


NOTE: the lines that set snapPID and rsyncPID have been wrapped in this document; each must be all on one line in the actual script.

Make the script executable by typing the following command:

chmod +x /share/MD0_DATA/rsnapshot/check_backup.sh

Then schedule the script to run each morning by adding the following line to the crontab:

0 7 * * * /share/MD0_DATA/rsnapshot/check_backup.sh

Restoring Data
Use the following command to show all the backups and when they completed:

[~] # ls -Ahlt  /share/MD0_DATA/rsnapshot/
drwxr-xr-x    3 admin    administ     4.0k Jul 10 23:21 daily.0/
drwxr-xr-x    3 admin    administ     4.0k Jul  9 23:40 daily.1/
drwxr-xr-x    3 admin    administ     4.0k Jul  8 23:41 daily.2/
drwxr-xr-x    3 admin    administ     4.0k Jul  7 23:37 daily.3/
drwxr-xr-x    3 admin    administ     4.0k Jul  6 23:29 daily.4/
drwxr-xr-x    3 admin    administ     4.0k Jul  5 23:27 daily.5/
drwxr-xr-x    3 admin    administ     4.0k Jul  5 04:59 daily.6/

In each of the daily folders there is a subdirectory for each server it has backed up and below that is a full replica of that server’s data.

Restoring files is simply done using the scp command on the backup NAS which has the following syntax:

scp <source_file> <site1_ip_address>:<destination>

If the source file/path has spaces in then enclose the full path in quotes.

So, to restore the file expenses.xls in the Contractors share from the 7th July, use this command:

scp "/share/MD0_DATA/rsnapshot/daily.2/<site1>/share/MD0_DATA/Contractors/expenses.xls" <site1_ip_address>:/share/MD0_DATA/Contractors

NOTE:  The above command is all one line.

Dealing With Overrunning Backups
If backups overrun into the day this could affect performance of the NAS and internet access.  This is particularly likely when the system is first set up as there may be a lot of data to transfer.  Once the initial transfer has taken place only changes are sent, so the backup is much quicker.

The check_backup.sh script automatically detects this and stops the backup running, but if you need to do it manually at another time then perform the following steps:

To check if a backup is still running, SSH to the NAS and type the following commands:

[~] # ps aux | grep rs
31897 admin   3256 S   /opt/bin/perl -w /opt/bin/rsnapshot daily
32219 admin   2140 S   /opt/bin/rsync -a --delete --numeric-ids
32220 admin   4748 S   /opt/bin/ssh -l admin <site1_ip_address>
32301 admin   1932 S   /opt/bin/rsync -a --delete --numeric-ids

If this shows any rsync or rsnapshot processes then the backup is still running.  To stop the backup kill the processes in this order:

/opt/bin/perl -w /opt/bin/rsnapshot daily
/opt/bin/rsync ….

To do this, use the command “kill” followed by the PID (process identifier) – this is the first number in the output of the ps command.  So for the example above, to stop the backup, issue these commands.

kill 31897
ps aux | grep rs

Wait for the rsnapshot daily process to exit – this may take a while so periodically run the ps command again to check if it is still there.  Once it has finished, then kill the first rsync process (this will stop the others as well)

kill 32219

NOTE:  Use the correct PIDs, don’t just copy the ones here !!!

If you kill the rsync process first then rsnapshot notices this and rolls back the snapshot that was in progress, deleting all the data it had copied.  Any previous days' data is still there, but if the backup was overrunning then you do not want to have to start copying the new data all over again; by doing it this way round the backup can pick up where it left off next time.

References:
Rsnapshot      http://www.rsnapshot.org/

Wednesday 18 May 2011

Unable to Spell Check Signatures In Outlook

A couple of our clients have recently asked how to get Outlook to spell check their signatures when sending emails.  At first I thought this was entirely unnecessary, as you should just check the spelling when you create the signature and it shouldn't change after that.  But one of these clients has quite a fancy signature consisting of a sidebar, and the email message gets typed alongside that - as this falls within the signature block it does not get spell checked.

We did quite a bit of research on this and everything we found said that it is not possible to enable spell checking in signatures in any version of Outlook.

However, I found a workaround.  It's really quite simple and I wouldn't normally post something like this here, but as everyone else is saying it is impossible I thought I really should point out how to do it.

Use stationery instead.

That's it, just move the signature html file from the signature folder under the user profile and place it in the stationery folder.  Then disable the signature and go into Stationery and set this as your theme instead.
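
For reference, on our machines the signature files live under %APPDATA%\Microsoft\Signatures and the stationery folder is %APPDATA%\Microsoft\Stationery (the exact paths may vary with your Outlook version).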

And there you go, the "signature" is now spell checked correctly.

Monday 14 March 2011

Sharing files between users on Linux

Some time ago, I got rid of my home 'server' (a Buffalo LinkStation hacked to run Linux) when the hard drive died.  It managed file shares, email, and backing up critical data to offsite storage (rsync.net - a no-frills remote filesystem that I highly recommend for techies).

The email moved into the cloud (first a VPS, now Google Apps for Domains) but I didn't want to use all my bandwidth and limited offsite storage for unimportant things like music.  No problem I thought, I'll just set up a folder on each machine that everyone has access to and regularly sync between them.

Now all my machines run Linux (Ubuntu) and I thought this would be easy, but it turns out there is no widely accepted way of sharing files between users on Linux.  Checking out the Ubuntu forums showed quite a few requests for some means of doing this but no real solution.  If you simply set permissions on a given folder (like you would in Windows, say), it won't work the way you expect, as any new files will belong to the primary group of the user that created them rather than inheriting the group of their parent folder.

File ACLs (and permissions generally) are the only things I've found that I prefer in Windows over Linux...

However, after some research I discovered the setgid bit (often discussed together with the sticky bit).  This does quite a few things, depending on the context it is used in, but for our purposes we can set it on a directory to ensure that new files do inherit their group from the parent folder.

The first step in setting this up is to create a group to use for controlling access to the shared folder.  In this example I have called the group shared-access but, obviously, call it whatever you want.

sudo addgroup shared-access

Add your user accounts to this group

sudo usermod -aG shared-access <username>

Set permissions on the shared folders (I am using a folder called shared that I have created under /mnt)

sudo chgrp -R shared-access /mnt/shared/
sudo chmod -R g+sw /mnt/shared


This gives the group write access, and the setgid bit on the directories makes new files and subfolders inherit the directory's group rather than the creating user's primary group.
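
A quick way to check it has worked: look for the s in the group permissions on the directory, then create a test file as any member of the group and check its group ownership.

ls -ld /mnt/shared    # should show drwxrwsr-x style permissions with group shared-access
touch /mnt/shared/test
ls -l /mnt/shared/test    # group should be shared-access, whoever created it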

Friday 4 March 2011

How To Automatically Connect To Network Shares On Login To A Mac

EDIT: Please see my new article for a better way to do this:
http://www.vuzzlevuzz.org/2011/10/how-to-automatically-connect-to-network.html

Been getting a bit hands on with Macs lately...

One of my clients employs quite a few freelancers on a short-term basis and they, understandably, need access to network shares.  On Windows it's easy to configure this in a login script, but there does not seem to be any way to do this for Mac clients (on a Windows Active Directory domain at least; Open Directory can do it, I understand).

At first, research seemed to indicate that using a login hook to run a shell script to mount the share was the way to go, but when I tried it I found that the drive gets mapped as root rather than as the logged-in user - not what we want at all.

The solution I finally hit upon was to use a launchd agent.

Launchd is a system for running various things when certain events occur and is the Mac replacement for the common Unix startup scripts, rc.d, init.d, etc.  On cursory inspection it seems quite flexible in what it can do, but the bit that I'm interested in allows a script to be run whenever any user logs in - this lets me mount the network share at login even for a user that has never logged into the machine, and with the correct credentials to boot.

In order to make this easy to modify at a later date, the shell script run by launchd does not mount the shares directly; instead it mounts the Windows server's NETLOGON share and executes a Mac-specific logon script which then mounts the end-user shares.  This allows me to manage which shares are mounted centrally, without having to modify every machine if I want to change something, and is analogous to the Windows logon script, so it is easy for other engineers to understand and support.


To set this up on a Mac you need 3 files:

org.vuzzlevuzz.mapfolders.plist
 1 <?xml version="1.0" encoding="UTF-8"?>
 2 <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
 3 <plist version="1.0">
 4 <dict>
 5 <key>Label</key>
 6 <string>org.vuzzlevuzz.mapfolders</string>
 7 <key>Program</key>
 8 <string>/Library/Scripts/mapfolders.sh</string>
 9 <key>RunAtLoad</key>
10 <true/>
11 </dict>
12 </plist>
mapfolders.sh
 1 #!/bin/bash
 2 mkdir /Volumes/NETLOGON
 3 /sbin/mount -t smbfs //servername/NETLOGON /Volumes/NETLOGON
 4 /Volumes/NETLOGON/OSXLogon.sh
 5 /sbin/umount /Volumes/NETLOGON
OSXLogon.sh
 1 /bin/mkdir /Volumes/sharename
 2 /sbin/mount -t smbfs //servername/sharename /Volumes/sharename
NOTE:  The line numbers are just to make it clear when lines have been wrapped

Copy org.vuzzlevuzz.mapfolders.plist to /Library/LaunchAgents and make it executable:
sudo chmod +x /Library/LaunchAgents/org.vuzzlevuzz.mapfolders.plist 
Copy mapfolders.sh to /Library/Scripts
Configure OSXLogon.sh as appropriate for your environment and place in the server NETLOGON share
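
To test without logging out and back in, you can load the agent by hand:

launchctl load /Library/LaunchAgents/org.vuzzlevuzz.mapfolders.plist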


There are a couple of caveats with this - I'm working to resolve them but at the moment they are not important for the place I am using this.  Will post a follow up if I get them sorted out, but for now be aware of the following:

  • The drives do not unmount when the user logs off
  • If another user logs in they will not have their drives mapped
  • If one user restarts the machine and then someone else logs in, it will work correctly


Mac OS X Reference Library: Creating launchd Daemons and Agents

Problem saving Office 2011 files to Windows 2008 shares

One of my clients just had a problem with opening Excel files on a network share.  When double clicking on a file they get the following error:
'<filename>' could not be found. Check the spelling of the file name, and verify that the file location is correct. If you are trying to open the file from your list of most recently used files on the File menu, make sure that the file has not been renamed, moved, or deleted.
Problem machine: Mac OS X running Office 2011
Two Windows file servers: one running Windows 2003, the other 2008

Office files can be opened on the Windows 2003 shares but not on the Windows 2008 ones
All other files can be opened fine on the Windows 2008 server

So I went to another Mac, running Office 2004, and it all worked fine.  I then looked at a third Mac running Office 2011 - this could open Office files from both servers fine too.

I finally tracked the problem down to the way the shares had been mounted - as soon as I mounted the Windows 2008 share via AFP rather than SMB it all worked.

Note that the Windows 2003 shares work fine with Office 2011 mapped via SMB, and the Windows 2008 shares work fine via SMB when using Office 2004.


So the problem was specific to Office 2011 connecting to a Windows 2008 share via SMB.


Connect to the share via AFP and you are good to go!
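
(In Finder choose Go... Connect to Server and enter afp://servername/sharename in place of smb://servername/sharename.)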