Debian Installer is an amazing piece of software, very extensible and hackable. In addition to its normal uses, it gives you a pretty nice environment running in a ramdisk that is easy to boot from the network, CD/DVD, USB drive, etc. This environment is really handy in a few different scenarios.
A lot of what d-i does is very useful for getting a ramdisk booted and set up properly: it configures language settings, the network, proxy servers, etc. When booting d-i in normal "install" mode you can follow the menus up to the point where disk partitioning starts without causing any writes to the disks in the machine. Once you are at that point you have a pretty nice environment set up and can start using the shell for additional hacking.
While there are official proper ways to extend d-i with your own udebs and interfaces for adding menu entries, this page focuses on "hacks" that you can quickly do with any existing d-i. See the d-i wiki page for less hackish stuff.
Here are some cool things you can do.
d-i includes wget, and you can use that to pull files onto the machine. After booting d-i, use the UI to get to the point where the network is configured, then get a shell (either from the main d-i menu or a virtual console) and wget whatever you need. The files you pull will reside in the ramdisk, so you're limited by its size; use df(1) to determine how much room you have. Also remember that executables retrieved with wget are just copies, and will need to be chmod'd to be executable.
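As a sketch of those steps (the server address and file name here are hypothetical stand-ins, and the timeout flags assume GNU wget):

```shell
# Hypothetical helper machine serving files over HTTP
url=http://192.0.2.10/sl
if wget -q -T 5 -t 1 -O /tmp/sl "$url"; then
    chmod +x /tmp/sl    # wget copies are not executable by default
    df -h /             # see how much ramdisk room is left
fi
```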
One really cool feature of d-i is that it includes a web server! Really it's just a simple shell script that uses netcat to speak some basic HTTP. To use it, after booting d-i proceed through the network configuration section, then go to the main menu and select "Save debug logs" and then the "web" option. You'll receive a notice that the webserver is running, along with its address, and you can point your web browser at the machine and get some useful information. If you want to make another file available, drop to a shell and put the file (or command output, etc.) in /var/log/ and it will show up. If you want to see how the web server is implemented, look at the /usr/bin/httpd shell script.
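For example, anything you drop into /var/log/ appears in the webserver's listing (the file names here are just illustrations):

```shell
# Anything placed in /var/log/ shows up in the httpd listing
dmesg > /var/log/my-dmesg 2>/dev/null || true
# captured command output works too
ls -lR /dev > /var/log/my-dev-listing 2>/dev/null || true
```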
Using the wget method described above you can pull additional programs onto the system. The d-i environment is pretty limited and only provides a few system libraries, so you might need to grab some libraries as well. Here is a set of typical steps.
$ ldd /usr/bin/sl
        libncurses.so.5 => /lib/libncurses.so.5 (0x00002ad572d2c000)
        libc.so.6 => /lib/libc.so.6 (0x00002ad572e87000)
        libdl.so.2 => /lib/libdl.so.2 (0x00002ad5730c4000)
        /lib64/ld-linux-x86-64.so.2 (0x00002ad572c14000)

d-i provides all of those except for libncurses:
# sl
sl: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
Once I've figured out all the things needed to get something working, I keep a list suitable for use with tar so if I need to do it again I can generate a current tarball of everything, wget it to the d-i system, and untar it.
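That workflow looks something like this (using /usr/bin/env as a stand-in for the binary and libraries you actually collected, and /tmp paths for illustration):

```shell
# The list file: one path per line, relative to /
printf 'usr/bin/env\n' > /tmp/sl-list.txt
# Generate a current tarball of everything on the list
tar -C / -zcf /tmp/sl.tar.gz -T /tmp/sl-list.txt
# On the d-i end you would wget the tarball and unpack it over /
# (unpacking into a scratch directory here for illustration)
mkdir -p /tmp/scratch
tar -C /tmp/scratch -zxf /tmp/sl.tar.gz
```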
If the additional program you want is available as a udeb, you can use the anna-install program to install it. There are only a few things available as udebs that aren't already loaded in d-i by default, but there are a couple of useful things.
You can use udpkg to install normal debian packages and it mostly works. Depending on which d-i image you used, you might even have a partial archive full of debs available that you can refer to directly like
udpkg -i /cdrom/pool/main/s/sl*.deb
If you have a root disk mounted, sometimes it's handy to be able to utilize the full install for things. d-i provides the chroot command, so you can run things in the system root that way, or you can just take advantage of libraries and binaries with something like
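A sketch of both approaches, assuming the installed system's root is mounted at /target (d-i's usual mount point; the commands run are just examples):

```shell
# Guarded so this is a no-op when no system root is mounted at /target
if [ -d /target ]; then
    # run a program inside the full installed system
    chroot /target dpkg -l || true
    # or just borrow its binaries and libraries from outside the chroot
    LD_LIBRARY_PATH=/target/lib:/target/usr/lib /target/bin/ls / || true
fi
```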
On i386 and amd64 at least, d-i runs on VC #1, but can also run shells on VC #2 and #3. By selecting the "Execute a shell" option in the d-i interface on VC #1, you have a total of 3 shells that you can run things in. If you need more shells than that, you can add additional virtual consoles. To do that, run something like
echo "tty5::askfirst:-/bin/sh" >> /etc/inittab
kill -HUP 1
Now we have a way to get files on and off the system, add additional programs, take advantage of the system root, and do multiple things at the same time.
Before I recycle or reuse hard drives, I like to wipe the existing data off of them so I can be sure not to leak any private data, passwords, etc. Wiping the data from a disk requires that you are not booted from that drive at the time, so this is a good use for d-i. The shred(1) command from the coreutils package doesn't need any shared libraries beyond what's provided by d-i. Use the wget method to get it and then run something like
# shred -u -v -n 10 /dev/sda
You can run several of these processes in parallel with no problems (using the multiple-shells technique described above). I often set up a machine to clean stacks of disks in this way before I send them to the computer recycling/reuse center. Read the shred(1) manpage for more info. You might also consider the wipe(1) command from the wipe package; it also has no additional library dependencies, and it has a more entertaining man page :)
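A sketch of the parallel version, demonstrated here on throwaway files; on real hardware you would point each shred at a /dev/sdX device instead (triple-check the device names first):

```shell
# Throwaway files standing in for the disks being wiped
dd if=/dev/zero of=/tmp/disk1 bs=64k count=4 2>/dev/null
dd if=/dev/zero of=/tmp/disk2 bs=64k count=4 2>/dev/null
# One shred per disk, all running at once; -u removes the target when done
shred -u -n 3 /tmp/disk1 &
shred -u -n 3 /tmp/disk2 &
wait    # block until every background shred finishes
```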
You can test hard drives using the badblocks(8) command. Where shred is about making sure the data on the drive is overwritten, badblocks is for testing that the blocks are working correctly. This is a good thing to do on new drives to ensure they are working OK during their RMA or warranty period, and also any time you are about to redeploy a used drive for a new purpose. badblocks can be used in several different modes: a read-only test (the default), a non-destructive read/write test where the block contents are saved before and restored after the write test, and a destructive read/write test. badblocks is already part of d-i, so just get a shell and run it.

Read-only:

# badblocks -s -v -b 4096 -c 10240 /dev/sda

Non-destructive read/write:

# badblocks -s -v -n -b 4096 -c 10240 /dev/sda

Destructive read/write:

# badblocks -s -v -w -b 4096 -c 10240 /dev/sda
In the above examples -s is status, -v is verbose, -b is the block size (which we increase to 4k from the 1k default), and -c is the count of blocks to test at a time (which we increase a lot from the default of 64 in order to speed things up, since modern systems have plenty of RAM). If you want to test the disk more than once, or as an extended stress test, you can add a -p # option to specify a number of passes. Also note that you can run multiple badblocks processes at the same time, which is a particularly good way of exercising the system. NOTE: the destructive write test also results in cleaning the drive, like the shred example above.
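badblocks will also run against a regular file, which is a safe way to try the options out before pointing it at a real /dev/sdX (the image path and sizes here are arbitrary; guarded in case badblocks isn't installed):

```shell
# Small scratch image standing in for a disk
dd if=/dev/zero of=/tmp/bb.img bs=1M count=4 2>/dev/null
if command -v badblocks >/dev/null 2>&1; then
    # Destructive read/write test over the whole image
    badblocks -s -v -w -b 4096 -c 1024 /tmp/bb.img
fi
```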
For drives that support S.M.A.R.T. you can run the drive's SMART tests using the smartctl utility. Use the wget method to grab the following from a full system of the same architecture:
/usr/sbin/smartctl
/usr/lib/libstdc++.so.6.0.10 (or whatever)
/usr/lib/libstdc++.so.6 -> /usr/lib/libstdc++.so.6.0.10 symlink
/lib/libgcc_s.so.1

Query the drive info (these examples are for a SATA drive, which needs the -d ata option):

# smartctl -a -d ata /dev/sda

Query the drive capabilities:

# smartctl -c -d ata /dev/sda

Check the health of the drive:

# smartctl -H -d ata /dev/sda

Read the error log:

# smartctl -l error -d ata /dev/sda

Read the self-test log:

# smartctl -l selftest -d ata /dev/sda

Run the full offline test (whose state you can query using the commands above):

# smartctl -t offline -d ata /dev/sda

Run the long test:

# smartctl -t long -d ata /dev/sda
When testing a new drive you probably want to do something like:

-a        to read about the drive and confirm it's the drive you think it is :)
-c        to read what capabilities the drive has
-H        to confirm the health of the disk is OK
-l error  to confirm there are no errors

Then run the offline and long tests, and afterwards read the error log again with -l error and the health with -H to confirm that things are OK.
In addition to looking for outright failures, for these tests to be useful it's probably good to record the drive counters before and after the tests to see what changed.
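A sketch of that bookkeeping, guarded so it's a no-op without smartctl or the example /dev/sda device (-A prints the drive's attribute counters; real runs would leave time for the test to finish before the second snapshot):

```shell
disk=/dev/sda    # example device
if command -v smartctl >/dev/null 2>&1 && [ -b "$disk" ]; then
    smartctl -A -d ata "$disk" > /tmp/smart-before || true
    smartctl -t long -d ata "$disk" || true
    # ...wait for the test to complete (smartctl -l selftest shows progress)...
    smartctl -A -d ata "$disk" > /tmp/smart-after || true
    diff -u /tmp/smart-before /tmp/smart-after || true
fi
```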
Some systems (HP ProLiant and Dell servers, for example) have Linux utilities for upgrading firmware. You can boot d-i, wget the update utility, and update the firmware on the system before installing. This might be particularly useful when a firmware upgrade is required to enable a piece of hardware that's needed for the install, since in that case it would not be possible to install the system first and then upgrade the firmware.
This method would probably also work for any diagnostic utilities that run under Linux.
If you've played with the built-in webserver, you know that it makes some information available about what d-i found on the system, including lspci output. This can be pretty useful when installing, especially if d-i fails to install on newer hardware. You can often plug some of the lspci output into Google and find others who are working on the same problem. One problem, though, is that the database lspci uses to name devices is static and built into d-i at the time of release, which often means it won't have names for the newer hardware you are working with.
From another system, grab /usr/bin/update-pciids and its dependency /usr/bin/which and make them available via HTTP. Use the wget method, install them in /usr/bin, and chmod +x them. Then run update-pciids:
# update-pciids
Connecting to pciids.sourceforge.net[188.8.131.52]:80
pci.ids.new          100% |*****************************|   494 KB    00:00 ETA
Done.
Then run something like
# lspci -nnv >/var/log/new-lspci
and retrieve the updated output via the webserver.
This is actually a feature of d-i; no tricks involved. Boot d-i in expert mode and proceed with the steps (which will include setting up the network) until you get to selecting additional d-i components to load. Select the "remote install via ssh" module; after it loads, select that option and follow the instructions to set a password for the "installer" user. The system generates an ssh host key and starts ssh, and then you can log in remotely and run the install. You want to use a normal 80x24 terminal window and not resize it, as that can disrupt the d-i interface.
I have used this feature when I was setting up a system for someone who was 3000 miles away and I wanted to let them do the install so they could choose partition details, set usernames, passwords, crypto passphrases, and other sensitive information. Very handy.
# anna-install openssh-client-udeb

Then you need to get rsync, libacl, libattr, and libpopt. Here's a list. Similar to above, generate a tarball of the needed stuff and wget it to the system.

$ tar zcvf rsync-ssh.tar.gz `cat rsync-ssh-list.txt`

Make that tarball available via http, in a public_html directory for example. Now drop to a shell, use wget to get the tarball onto the system, and untar it in the root of the ramdisk. Test and make sure ssh and rsync run properly.
# rsync -avzWHS --dry-run -x --delete --numeric-ids -P root@orighost:/ .
Here is an explanation of the options:

-a             archive mode; preserves permissions, ownership, timestamps, symlinks, devices, etc.
-v             verbose
-z             compress file data during the transfer
-W             copy whole files rather than using rsync's delta-transfer algorithm
-H             preserve hard links
-S             handle sparse files efficiently
--dry-run      show what would be transferred without actually doing it
-x             don't cross filesystem boundaries
--delete       delete files on the receiving side that don't exist on the sender
--numeric-ids  transfer numeric uid/gid values rather than mapping by user/group name
-P             show progress and keep partially transferred files
Thanks to Colin Watson and dann frazier for contributing comments and tricks.

Matt Taggart <firstname.lastname@example.org>