Here’s a quick post — it should be a nice short one by my standards. This weekend I decided to upgrade a couple of my Ubuntu servers from 18.04 to 20.04. I ran into a bit of a problem with a really tiny, cheap VPS that I keep mainly for playing around. It only has 256 MB of RAM and 5 GB of storage. Finding enough free disk space to even run the upgrade was an interesting challenge, but that turned out to be the easy part.
After I was all done upgrading, I rebooted, and the server never came back up. I figured out how to get access to the console, rebooted the VPS again, and watched as it froze with a kernel panic. I couldn’t scroll back far enough to see the very beginning of the panic, but the last line gave me a big clue:
---[ end Kernel panic - not syncing: System is deadlocked on memory ]---
Looking at the stack trace above that line, I could see that the kernel was inside populate_rootfs and appeared to be in the middle of unpacking the initramfs into a RAM disk. It seems the kernel ran out of free memory during this process. I did some Googling and found a few others who were seeing the same or a similar problem on low-RAM systems (1, 2, 3), but nobody had posted a solution.
I rebooted once again, but this time I interrupted GRUB and headed back into the older 4.15.0-144-generic kernel that had been left over from 18.04 instead of the new 5.4.0-74-generic kernel, and it worked fine. At first I convinced myself that I’d simply have to stick with the older kernel from 18.04, but I really wanted to get it working with the newer 20.04 kernel, so I played around some more. I noticed in /boot that the initramfs had gotten even bigger with the new 20.04 kernel:
-rw-r--r-- 1 root root 46854754 Jun 6 11:22 initrd.img-4.15.0-144-generic
-rw-r--r-- 1 root root 51028888 Jun 6 11:40 initrd.img-5.4.0-74-generic
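As an aside, if you ever want to see what is actually packed into one of these images, the lsinitramfs tool that ships with initramfs-tools will list the contents — in my experience most of the bulk is typically kernel modules and firmware blobs. Something like:

# list everything inside the 5.4 initramfs image
lsinitramfs /boot/initrd.img-5.4.0-74-generic | less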
It seemed kind of crazy that a minimal server really needed around 50 MB of stuff in a RAM disk just to hand the boot process off to the final root filesystem. It was also interesting because it looked like my 18.04 kernel’s initramfs may have been teetering on the edge of being too big as well. Anyway, I noticed that at the top of /etc/initramfs-tools/initramfs.conf there is an option that controls which modules get added to the initramfs. It defaults to MODULES=most, which adds most filesystem drivers and all disk drivers. I changed it to MODULES=dep, which tries to guess which modules are actually needed and includes only those. Then I regenerated the 5.4 initramfs by running “update-initramfs -u” (I’ll recap the exact commands below) and checked it again:
-rw-r--r-- 1 root root 46854754 Jun 6 11:22 initrd.img-4.15.0-144-generic
-rw-r--r-- 1 root root 8811191 Jun 6 11:44 initrd.img-5.4.0-74-generic
I was amazed to discover that it had shrunk down to around 8 MB in size. I guess there were a lot of drivers in there that were not needed. Anyway, after making that change, I rebooted and everything works great now!
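For reference, the whole fix boils down to a one-line config edit plus a rebuild. I made the edit by hand, but it amounts to something like this:

# switch MODULES=most to MODULES=dep in the initramfs config
sudo sed -i 's/^MODULES=most/MODULES=dep/' /etc/initramfs-tools/initramfs.conf

# rebuild the initramfs for the currently running kernel
sudo update-initramfs -u

# or rebuild for every installed kernel at once
# sudo update-initramfs -u -k all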
Obviously there are always tradeoffs. The downside is that if the storage setup of my VPS changes in the future, the server might fail to boot because the initramfs no longer contains a driver it needs. Also, if I image the server, the image likely won’t be able to boot directly on a different computer or VM unless I regenerate the initramfs first. But this solution works great for me, and now my tiny VPS is happily running Ubuntu 20.04 with kernel 5.4.
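If that ever bites me, the recovery plan should be simple enough (untested so far): boot the old kernel or a rescue image, flip the setting back to the generic default, and rebuild:

# go back to the generic, everything-included initramfs
sudo sed -i 's/^MODULES=dep/MODULES=most/' /etc/initramfs-tools/initramfs.conf
sudo update-initramfs -u -k all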