23 September 2013: Monday

LDAP:

  • Accounts:
    • Account Services notified me of a duplicate account.  Worked with them and the customer to merge the unactivated duplicate account with the previously existing account.
    • There were no issues flagged in the updates over the weekend.

Backups:

  • Backup Maintenance Day:
    • 9:00 AM start delayed:
      • COM’s MED23 had issues on Saturday, and the backup of its M:\ disk was still running this morning.  Eventually I had to give up on it: it had written 5036 GB and run for 2 days 3 hours when I killed it at 12:55 PM.
      • Email server incrementals also ran long; those finished up at 9:09 AM.
    • Tape library maintenance was a world of hurt!
      • The user accounts got wiped out and the root password was reset to the default.
      • The carousel controllers are so old that they cannot be updated.  Sadly, the vendor either forgot or was not aware of this and attempted to upgrade one of them.  Two hours later we had it working again, but it is very temperamental.  New controller boards will be sent out for us to install at our earliest convenience.
  • Tape Movement Prep:
    • Set up the spreadsheet and printed out the list of the first 128*3 (384) tapes so I can retrieve them tomorrow.
  • Routine Admin:
    • Defined nine new clients for DBAs and SAA.
  • New Hardware:
    • Testing of the NFS mount parameters over the weekend showed that {r,w}size=32768 provides the best throughput (with the OpenIndiana NFS server and the throughput-performance tuned profile on the RHEL6 NFS client).  With 128KB and 64KB write block sizes, I am able to achieve 900 MB/s writing with dd through the NFS mount; a sketch of the commands is in the last bullet below.  The nearly-production systems are being updated today to use that size instead of the 131072 they had been using, which should double their throughput.
    • Testing was performed with {r,w}size values of 131072, 32768, and 1048600: the first because it is 128K (the blocksize of the ZFS pool being mounted), 32768 because it is the commonly recommended value, and 1048600 because it is the DataDomain recommendation for 10GbE connections to their devices from Linux systems.
      • The results make me wonder whether the DataDomain might also work better with the smaller value, but since I am already running into that device's throughput limits, I am not going to spend any more time worrying about it.
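    • For reference, a minimal sketch of the test setup and throughput check.  The server name (oi-nfs), export path (/export/backup), mount point (/mnt/backup), and test file name are placeholders, not the actual systems:

        # On the RHEL6 client, apply the tuned profile used during the tests
        tuned-adm profile throughput-performance

        # Mount the OpenIndiana export with the sizes that tested best
        mount -t nfs -o rw,rsize=32768,wsize=32768 oi-nfs:/export/backup /mnt/backup

        # Write ~8 GB through the mount with 128KB blocks, then repeat with 64KB blocks
        dd if=/dev/zero of=/mnt/backup/ddtest bs=128k count=65536 conv=fsync
        dd if=/dev/zero of=/mnt/backup/ddtest bs=64k count=131072 conv=fsync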
