December 23, 2019

Fedora Setup

Installation and configuration considerations for Fedora.

Last updated for Fedora 31.

This document covers some of the patterns I follow when installing Fedora systems. They may not be appropriate for all situations, and there are many considerations that are not covered here. After installing a system, I usually connect it to a configuration management system that implements a more secure and robust configuration.

Virtual Machines on QEMU/KVM

If using virt-manager, check the "Customize configuration before install" box before beginning the installation.

  • Use the Q35 chipset
  • Use UEFI firmware -- Secure Boot is recommended but optional
  • Add a VirtIO SCSI controller, and attach disks to the SCSI bus (as opposed to the plain VirtIO block bus)
  • Network devices should be VirtIO
  • For the display, Spice is generally more responsive, but VNC is more compatible (and directly usable within Cockpit)
  • Video device should be QXL

When storing disk images as files:

  • Use the qcow2 storage format
  • Cache mode: none
  • IO mode: native
  • Discard mode: unmap
  • Detect zeroes: unmap
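The settings above can also be expressed on the command line. The following is a sketch using virt-install; the VM name, image paths, and sizes are placeholders, and the exact suboptions available depend on the virt-install version.

```shell
# Placeholder names/paths; adjust memory, vCPUs, and disk size as needed.
virt-install \
  --name f31 --memory 4096 --vcpus 2 \
  --machine q35 \
  --boot uefi \
  --controller type=scsi,model=virtio-scsi \
  --disk path=/var/lib/libvirt/images/f31.qcow2,size=20,format=qcow2,bus=scsi,cache=none,io=native,discard=unmap,detect_zeroes=unmap \
  --network network=default,model=virtio \
  --graphics spice \
  --video qxl \
  --cdrom /var/lib/libvirt/images/Fedora-Server-dvd-x86_64-31.iso
```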

Firmware

Ensure that UEFI (as opposed to legacy BIOS) is the configured boot method, where supported.

If legacy BIOS is the only option available, then the storage configuration must differ somewhat. A few extra precautions in the storage setup -- for example, using GPT partitioning with a BIOS boot partition, and reserving space that could later hold an EFI System Partition -- will make it possible to migrate the installed system to UEFI later.

Installation

These instructions assume that a Fedora Server installation image is being used.

Network

Configure network settings to allow for Internet access following installation.

Accounts

The root account should be disabled. This is now the default. A separate administrator account should be created.

Packages

The default package set can be used if the system will be connected to a configuration management system.

Storage

The basic objective is to use LVM for as much of the storage layout as possible. The EFI System Partition (ESP) cannot be placed on LVM. The boot partition may be able to live on LVM, but then the volume group cannot be encrypted, and this would have a negative impact on survivability and troubleshooting.

Single Disk

For an installation on a single disk:

  • Choose the "Custom" storage configuration option.
  • Click on the "Click here to create them automatically" option.
  • Select the "/" partition, then hit "Modify..." under the Volume Group configuration.
  • Enter a reasonable volume group name. I often use "servername_vg00" when the server name is likely to be stable, or just "vg00" otherwise.
  • Change "Size policy" to "As large as possible" so that the volume group will fill out the disk.
  • Hit "Save".

See the "Additional Volumes" section, below.

Make any other needed customizations.

Software RAID

Bugs Ahead

Over the years, I have run into a variety of bugs in Fedora's installer when setting up customized storage layouts. In some cases, I have had to manually create all of the volumes before starting the installer to get the desired layout. In other cases, Fedora has failed to correctly set up the boot environment, requiring additional steps after installation to get the system booting properly. In most cases, the bugs simply cause the installer to crash.

For an installation on multiple disks in a software RAID configuration:

I previously recommended layering LVM on top of mdraid devices. This is no longer the case. LVM RAID is now up to the task and offers a more flexible option.

Red Hat's documentation covers RAID logical volumes in detail.

Choose the "Advanced Custom (Blivet-GUI)" storage configuration option.

On the first disk in the array, create a new partition:

  • 1 GiB partition
  • Filesystem: EFI System Partition
  • Label: ESP
  • Mountpoint: /boot/efi

For each subsequent disk in the array, create a new partition, replacing "X" with the number of the disk in the sequence (sdb = 1, sdc = 2, ...):

  • 1 GiB partition
  • Filesystem: EFI System Partition
  • Label: ESPX
  • Mountpoint: /boot/efiX

These additional ESPs are placeholders. The contents of the first ESP will need to be synchronized to them for them to function as backups in the event that the first disk is lost. Unfortunately, there is no support for ESPs on any software RAID.
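The synchronization can be handled with a small helper run after bootloader or kernel updates. This is a sketch; the function name and paths are my own, and the list of backup ESPs must match the layout above.

```shell
# Copy the contents of the primary ESP into each backup ESP.
# Note: this does not remove files deleted from the primary; use a
# mirroring tool such as rsync --delete if exact mirrors are needed.
sync_esps() {
  local primary=$1; shift
  local backup
  for backup in "$@"; do
    cp -a "$primary"/. "$backup"/
  done
}

# Example (run as root): sync_esps /boot/efi /boot/efi1 /boot/efi2
```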

Create a software RAID device for /boot:

  • Select the free space on one of the disks and click the "+" button.
  • Select the "Software RAID" device type.
  • Check the box next to every disk in the array.
  • Select RAID level "raid1".
  • Size the partition to 1 GiB.
  • Filesystem: xfs
  • Label: boot
  • Name: boot
  • Mountpoint: /boot

Create the LVM volume group:

  • Select the free space on one of the disks and click the "+" button.
  • Select the "LVM2 Volume Group" device type.
  • Check the box next to every disk in the array.
  • Give the volume group a name, such as "servername_vg00" or "vg00".
  • If desired and appropriate, check the "Encrypt" option.
  • Hit "OK".

Select the new volume group in the left pane, then create the logical volumes in it. Do not allow the total size of the volumes to exceed the capacity of one disk. Be conservative in sizing, as you can enlarge the volumes later if needed.

For "/":

  • Filesystem: xfs
  • Label: sysroot
  • Name: sysroot
  • Mountpoint: /

For swap:

  • Filesystem: swap
  • Label: swap
  • Name: swap

There are many opinions on the right amount of swap space, and many factors may come into play. Red Hat publishes sizing recommendations in its installation documentation.
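One commonly cited rule of thumb (roughly matching older Red Hat tables, without hibernation) can be sketched as a small function. Treat the thresholds as approximations, not authoritative guidance.

```shell
# Rough swap sizing: input and output are both GiB.
# <=2 GiB RAM: 2x RAM; <=8: equal to RAM; <=64: half of RAM; above: 4 GiB.
swap_gib() {
  local ram=$1
  if   [ "$ram" -le 2 ];  then echo $((ram * 2))
  elif [ "$ram" -le 8 ];  then echo "$ram"
  elif [ "$ram" -le 64 ]; then echo $((ram / 2))
  else                         echo 4
  fi
}
```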

See the "Additional Volumes" section, below.

Make any other needed customizations.

RAID: Further Setup Required

At this point, the volumes are not protected by RAID. They are linear volumes. RAID will be configured after installation.

When saving this configuration, the installer will issue a warning as a result of the ESP (the "stage1 device") not being on an array. Ignore this warning for now -- synchronizing the ESP to the other disks will have to be handled differently.
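After the first boot, the linear logical volumes can be converted to RAID in place with lvconvert. This sketch assumes the volume group and logical volume names used above; run it as root, one volume at a time.

```shell
# Convert each linear LV to a two-leg raid1 LV (-m 1 = one mirror).
lvconvert --type raid1 -m 1 vg00/sysroot
lvconvert --type raid1 -m 1 vg00/swap

# Watch the initial synchronization progress.
lvs -a -o name,copy_percent,devices vg00
```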

Additional Volumes

In many cases, additional volumes will be appropriate. The following paths are good candidates for separate volumes, and separating some of them may even be required by policy:

  • /home
  • /tmp
  • /var
  • /var/log
  • /var/log/audit

Depending upon how the system will be utilized, some other paths to consider placing on separate volumes may include:

  • /opt
  • /var/lib/libvirt
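Additional volumes can also be added after installation as needs become clear. A sketch, with hypothetical names and sizes, assuming a volume group named "vg00":

```shell
# Create and format a separate volume for /var/log; sizes are examples only.
lvcreate -n var_log -L 4G vg00
mkfs.xfs -L var_log /dev/vg00/var_log
# Then add the corresponding /etc/fstab entry and mount it.
```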

Post-Install

If the system does not support UEFI and is configured for legacy BIOS booting, run "grub2-install" against each system disk.

grub2-install /dev/sdb

Modify the /etc/hosts file to reflect the correct hostname and domain.

Modify the /etc/aliases file to provide an email alias for the root user.
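The two file edits above can be sketched as follows; the hostname, IP address, and email address are placeholders, and newaliases assumes an MTA that uses the aliases database is installed.

```shell
# Placeholder values; substitute the real hostname and destination address.
echo '192.0.2.10  host1.example.com host1' >> /etc/hosts
echo 'root: admin@example.com' >> /etc/aliases
newaliases   # rebuild the aliases database
```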

Connect the system to the correct configuration management solution.

Virtual Machine Hosts

The following bash snippet can be used to set up a bridge. Replace "enp8s0" with the correct physical interface. If you need a static IP configured on the interface, then uncomment and update the corresponding lines. This script will temporarily disrupt connections on the interface, so be careful if running it remotely.

dnf -y install bridge-utils
export MAIN_CONN=enp8s0
bash -x <<EOS
  nmcli c delete "$MAIN_CONN"
  nmcli c delete "Wired connection 1"
  nmcli c add type bridge ifname br0 autoconnect yes con-name br0 stp off
  #nmcli c modify br0 ipv4.addresses 192.168.1.100/24 ipv4.method manual
  #nmcli c modify br0 ipv4.gateway 192.168.1.1
  #nmcli c modify br0 ipv4.dns 192.168.1.1
  nmcli c add type bridge-slave autoconnect yes con-name "$MAIN_CONN" ifname "$MAIN_CONN" master br0
EOS
nmcli

nmcli supports shortened versions of many of its parameters, such as "c" for "connection" above and "mod" for "modify" below. See the "NOTES" section from man nmcli for more.

If you need to put the new bridge interface in a different firewall zone ("work", in this example):

nmcli c mod br0 connection.zone work
