
Migration from samba2.2+ldap to samba3+ldap

I am busy (while the customers are away, getting unseasonably tanned) upgrading a Linux cluster, currently acting as a file server, to samba3 and OpenLDAP 2.2.
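For the record, the delicate part is moving the accounts over: Samba 3 replaced the old sambaAccount LDAP schema with sambaSamAccount, so the 2.2 entries cannot be reused as they are. A minimal sketch of the idea with pdbedit (the backend strings and the LDAP URL are examples only, adjust them to your smb.conf; ldapsam_compat is the Samba 3 backend that reads the old 2.2 schema):

  # read the entries stored with the old Samba 2.2 schema and
  # write them back as sambaSamAccount entries (run as root)
  pdbedit -i ldapsam_compat:ldap://localhost -e ldapsam:ldap://localhost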
On the first day I was lucky:
  • the server wouldn't see the new 140GB disks (only 80GB of each), and to fix this I had to flash the RAID controller (an Adaptec 2100S)
  • the server wouldn't boot from CD until I reset the BIOS settings to their defaults
  • a DDS4 tape drive (boasting 20GB native capacity) was unable to back up a 15GB partition
One day completely lost, and it looks like my holidays are shrinking...

UPDATE: just to make sure things went as wrong as possible, somebody decided to rearrange the power supplies while I was plugging the new disks into the second system. Another hour wasted...

3 Jan 2005, Update: all users came back to their offices and I must say that I was a little worried. In my opinion the system was not really ready, but luckily all went well. Only three users were locked out because their passwords had been lost in the migration (resetting them is quick, see below).
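In case it helps anyone hitting the same problem, recovering a locked-out account is quick. Something along these lines, run as root on the server (jdoe is a placeholder username):

  # check whether the account survived the migration at all
  pdbedit -L | grep jdoe
  # set a fresh password for the locked-out account
  smbpasswd jdoe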
All's well that ends well...
