Diffstat (limited to 'community/cloud-init/README.Alpine')
 community/cloud-init/README.Alpine | 426
1 files changed, 281 insertions, 145 deletions
diff --git a/community/cloud-init/README.Alpine b/community/cloud-init/README.Alpine
index 6aaa0e42a4b..40cb53dc30d 100644
--- a/community/cloud-init/README.Alpine
+++ b/community/cloud-init/README.Alpine
@@ -1,58 +1,229 @@
-
After the cloud-init package is installed you will need to run the
"setup-cloud-init" command to prepare the OS for cloud-init use.
This command will enable cloud-init's init.d services so that they are run
-upon future boots/reboots. It also enables eudev's init.d services as udev is
-used by cloud-init for both disk configuration and network card persistent
-naming.
+upon future boots/reboots.
+
+
+** IMPORTANT **
+---------------
+
+To avoid the Alpine package having a large number of dependencies that
+would in many cases be unnecessary, the package only defines a general
+set of dependencies that are common to the majority of use cases.
+
+When creating an Alpine OS image that makes use of cloud-init you should
+read the rest of this document, set a DataSources list that enables only
+whichever DataSource(s) you require, and install any additional Alpine
+packages required by the filesystem type, DataSource(s), etc. in use.
+The relevant details are contained in the rest of this document.
+
+
+General configuration customisation
+-----------------------------------
+
+Whilst the /etc/cloud/cloud.cfg file can be modified, it is better
+practice to instead create a new file, or files, in the
+/etc/cloud/cloud.cfg.d/ directory to override settings from cloud.cfg.
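As a sketch of this practice (the file name 05-local-overrides.cfg and the
preserve_hostname setting are arbitrary illustrations; for demonstration the
snippet writes to a temporary directory rather than the real
/etc/cloud/cloud.cfg.d):

```shell
# Demonstration: write an override fragment as cloud.cfg.d expects.
# On a real image cfg_dir would be /etc/cloud/cloud.cfg.d.
cfg_dir="$(mktemp -d)"

# Fragments are read in lexical order after cloud.cfg, so a numeric
# prefix controls the order in which overrides are applied.
cat > "$cfg_dir/05-local-overrides.cfg" <<'EOF'
# Settings here override those in /etc/cloud/cloud.cfg
preserve_hostname: false
EOF
```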
+
+
+Customising the enabled Data Sources list
+-----------------------------------------
+
+Whenever cloud-init runs on first boot it determines which DataSource is
+to be used for performing machine configuration. It does so by reading the
+"datasource_list" definition in either the /etc/cloud/cloud.cfg file or
+other files in the /etc/cloud/cloud.cfg.d/ directory.
+
+By default this Alpine package sets "datasource_list" in the
+/etc/cloud/cloud.cfg file with a list of all DataSources *except*
+CloudSigma and SmartOS. This is because those two DataSources require the
+py3-pyserial package to be installed, which is not defined as a dependency
+of the Alpine cloud-init package.
+
+If you are creating a cloud-init enabled Alpine image for a specific
+environment, or a known set of environments, then it makes sense to add a
+"datasource_list" value listing only whichever DataSource(s) are
+required/expected. This will potentially result in (slightly) faster
+first boot times, as each individually enabled DataSource would otherwise
+be checked in sequence until the required one is reached.
+
+For example, if creating an Alpine image only for use with AWS, then
+creating a file /etc/cloud/cloud.cfg.d/01-datasource.cfg with the
+following contents would make sense:
+
+ datasource_list: ['Ec2']
+
+whereas if the image was intended to be used with any of AWS, Azure, or
+Google Cloud then:
+
+ datasource_list: ['Ec2', 'Azure', 'GCE', 'None']
+
+and for an image to only be used with ISO or disk configuration files:
+
+ datasource_list: ['NoCloud']
+
+
+Additional Alpine packages required by specific DataSources
+-----------------------------------------------------------
+
+Cloud-init originally assumed the use of dhclient for DHCP purposes.
+However, as of cloud-init 23.3 there is some support for both dhcpcd and
+udhcpc. Therefore no specific DHCP client software has been set as a hard
+dependency for the Alpine cloud-init package.
+
+However, if you are using a DataSource (e.g. AWS Ec2, Azure, GCE,
+Hetzner, NWCS, OpenStack, Oracle, Scaleway, UpCloud, or Vultr) that needs
+to set up an ephemeral (temporary) DHCPv4 connection in order to talk to a
+Metadata Server (to retrieve metadata, network configuration, and
+user-data information), then note that these DataSources may expect
+"dhclient" to be installed and may try to execute it directly; retrieving
+information from the metadata server may therefore fail if dhclient is
+not installed.
+
+ Data Source Package(s) you should add
+ ---------------------------------------------------------
+ Azure dhclient, py3-passlib
+ CloudSigma & SmartOS/Joyent py3-pyserial
+ Ec2 dhclient, eudev
+ GCE dhclient
+ Hetzner dhclient
+ MAAS py3-oauthlib
+ NWCS dhclient
+ OpenStack dhclient, eudev
+ Oracle dhclient
+ Scaleway dhclient
+ UpCloud dhclient
+ VMware py3-netifaces
+ Vultr dhclient
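When scripting image builds, the table above could be captured in a small
helper; a minimal sketch (the function name extra_pkgs_for_ds is a
hypothetical choice, not part of cloud-init):

```shell
# Hypothetical helper mirroring the table above: print the extra Alpine
# package(s) a given DataSource needs beyond the package defaults.
extra_pkgs_for_ds() {
    case "$1" in
        Azure)                    echo "dhclient py3-passlib" ;;
        CloudSigma|SmartOS)       echo "py3-pyserial" ;;
        Ec2|OpenStack)            echo "dhclient eudev" ;;
        GCE|Hetzner|NWCS|Oracle|Scaleway|UpCloud|Vultr)
                                  echo "dhclient" ;;
        MAAS)                     echo "py3-oauthlib" ;;
        VMware)                   echo "py3-netifaces" ;;
        *)                        return 1 ;;   # unknown DataSource
    esac
}
```

which could then be used when building, say, an AWS image:
"apk add $(extra_pkgs_for_ds Ec2)".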
+
+
+Additional Alpine packages required by specific cloud-init modules
+------------------------------------------------------------------
+
+If you intend to use the following modules then you should install
+additional Alpine packages:
+
+ cc_ca_certs ca-certificates
+
+ cc_disk_setup blockdev
+ lsblk
+ parted
+ sfdisk (for MBR partitioning)
+ sgdisk (for GPT partitioning)
+ wipefs (if you wish to delete existing partitions)
+
+ cc_ssh_import_id ssh-import-id
+
+If you want to create/resize filesystems using cc_disk_setup and/or
+cc_resizefs then you will need to install the relevant package(s)
+containing the appropriate tools:
+
+ BTRFS: btrfs-progs
+ EXT2/3/4: e2fsprogs-extra
+ F2FS: f2fs-tools
+ LUKS: cryptsetup and device-mapper
+ LVM: lvm2 and device-mapper
+ XFS: xfsprogs and xfsprogs-extra
+ ZFS: zfs
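Similarly, the filesystem-to-package mapping above could be captured in a
small helper when scripting image builds; a minimal sketch (the function
name pkgs_for_fs is a hypothetical choice, not part of cloud-init):

```shell
# Hypothetical helper mirroring the list above: print the Alpine
# package(s) providing the creation/resize tools for a filesystem type.
pkgs_for_fs() {
    case "$1" in
        btrfs)          echo "btrfs-progs" ;;
        ext2|ext3|ext4) echo "e2fsprogs-extra" ;;
        f2fs)           echo "f2fs-tools" ;;
        luks)           echo "cryptsetup device-mapper" ;;
        lvm)            echo "lvm2 device-mapper" ;;
        xfs)            echo "xfsprogs xfsprogs-extra" ;;
        zfs)            echo "zfs" ;;
        *)              return 1 ;;   # unknown filesystem type
    esac
}
```

used when building an image, e.g. "apk add $(pkgs_for_fs ext4)".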
+
+
+Additional Alpine packages you may want or need to manually install
+-------------------------------------------------------------------
+
+ doas
+
+ Alpine v3.15+, see the section "Sudo & Doas" below.
+
+ mdevd or eudev
+
+ see the section "mdev / mdevd / eudev" below.
+
+ mount (Alpine Edge & v3.17+) or util-linux-misc (Alpine v3.15 & v3.16)
+ or util-linux (Alpine <= v3.14)
+
+ see the section "Using ISO images for cloud-init configuration (i.e.
+ with NoCloud/ConfigDrive)" below.
+
+ openssh-server or openssh-server-pam
+
+ see the section "Unable to SSH in as user(s) as the account(s) is/are
+ locked" below.
+
+ sudo
+
+ see the section "Sudo & Doas" below.
+
+
+mdev / mdevd / eudev
+--------------------
+
+As of Alpine release v3.17 (and Edge from mid-Nov 2022 onwards) the Alpine
+cloud-init package can be used together with either eudev (aka udev), mdev,
+or mdevd.
+
+Due to this change the cloud-init package from Alpine v3.17 onwards no
+longer declares a dependency on the eudev package - you will need to
+manually install either eudev or mdevd to your Alpine disk image if
+required. You may also need to run "setup-devd mdevd" or "setup-devd udev"
+accordingly to enable those instead of mdev.
+
+Upstream cloud-init is only designed and tested to work with udev and
+therefore some cloud-init functionality may not work when either mdev or
+mdevd are used instead. This document will, over time, detail any known
+mdev or mdevd incompatible cloud-init modules and/or Data Sources.
Sudo & Doas
-----------
Cloud-init has always had support for 'sudo' when adding users (via
-user-data). As Alpine is now moving towards preferring the use of 'doas'
+user-data). As Alpine has now moved towards preferring the use of 'doas'
rather than 'sudo', support for 'doas' has been added to the cc_users_groups
module.
-As a result, the Alpine cloud-init package no longer declares a dependency
-on sudo - you must ensure to install either the 'doas' or 'sudo' package (or
-indeed both), depending on which you wish to use, in order to be able to
-create users that can run commands as a privileged user.
-
+As a result, the Alpine cloud-init package from Alpine v3.15 onwards no
+longer declares a dependency on sudo - you must install either the 'doas'
+or 'sudo' package (or indeed both), depending on which you wish to use, in
+order to be able to create users that can run commands as a privileged
+user.
Doas
----
-The cloud-init support for Doas has not (yet) been upstreamed, it only
-exists in the Alpine cloud-init package. This functionality performs some
-basic sanity checks of per-user Doas rules - it double-checks that the user
-referred to in any such rules corresponds to the user the rule(s) is/are
-defined for - if not then an error appears during 1st boot and no Doas rules
-are added for that user.
+The cloud-init doas functionality performs some basic sanity checks of
+per-user Doas rules: it verifies that the user referred to in any such
+rule corresponds to the user the rule is defined for. If not, an error
+appears during 1st boot and no Doas rules are added for that user.
It is recommended that you set up a Doas rule: "permit persist :wheel" so
that all members of group 'wheel' can have root access. The default
-/etc/doas.d/doas.conf file has such a rule but it is commented out. The
-/cloud-init global configuration file /etc/cloud/cloud.cfg defines the default
-'alpine' user to be created upon 1st boot with a Doas rule giving root access.
+/etc/doas.conf file has such a rule but it is commented out.
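Enabling that commented-out rule can be sketched as follows (demonstrated
on a temporary copy rather than the real /etc/doas.conf):

```shell
# Demonstration: start from a copy of doas.conf as shipped, with the
# wheel rule commented out. On a real image conf would be /etc/doas.conf.
conf="$(mktemp)"
printf '%s\n' '# permit persist :wheel' > "$conf"

# Uncomment the rule so all members of group 'wheel' gain root access.
sed 's/^# *permit persist :wheel$/permit persist :wheel/' "$conf" > "$conf.new"
mv "$conf.new" "$conf"
```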
+
+The cloud-init global configuration file /etc/cloud/cloud.cfg defines the
+default 'alpine' user to be created upon 1st boot with a Doas rule giving
+password-less root access. As the 'alpine' user is locked for password
+access, both local and remote, and only SSH key-based access is
+available, password-based Doas access would obviously not work.
To set up Doas rules for additional users, both existing and new, add a
'doas' entry in the user-data to define one or more rules to be set up for
-each, for example:
+each user, for example:
users:
- default
- name: tester
- doas: ["permit tester as root"]
+ doas:
+ - permit tester as root
-When cloud-init runs on 1st boot it creates the file
-/etc/doas.d/cloud-init.conf containing all the per-user rules specified in
+When cloud-init runs on 1st boot it appends entries to the file
+/etc/doas.conf containing all the per-user rules specified in
user-data, as well as the rule for the default 'alpine' user.
+NOTE: if you specify a "users:" section in user-data you MUST specify a
+"default" entry in addition to any other entries if you want the default
+'alpine' user to be created.
NTP
@@ -75,15 +246,16 @@ To use Chrony as the NTP client:
Specify a ntp section in your cloud-init User Data like so:
ntp:
- pool:
+ pools:
- 0.uk.pool.ntp.org
- 1.uk.pool.ntp.org
- If you do not specify any pool or servers then 0.pool.ntp.org ->
+ If you do not specify any pools or servers then 0.pool.ntp.org ->
3.pool.ntp.org will be used.
- The file /etc/cloud/templates/chrony.conf.alpine.tmpl is used by cloud-init
- as a template to create the configuration file /etc/chrony/chrony.conf.
+ The file /etc/cloud/templates/chrony.conf.alpine.tmpl is used by
+ cloud-init as a template to create the configuration file
+ /etc/chrony/chrony.conf.
To use Busybox as the NTP client:
@@ -128,27 +300,75 @@ Version 21.3 of cloud-init has added some support for network interface
hotplugging, that is the addition or removal of additional network
interfaces on an already running machine.
-A simple daemon (cloud-init-hotplugd) runs at boot time that listens for udev
-hotplug events and triggers cloud-init to look at them.
+A simple daemon, cloud-init-hotplugd, if enabled, runs at boot time and
+listens for udev hotplug events, triggering cloud-init to react to them.
This daemon, via its init.d script, is *not* at present enabled by the
setup-cloud-init script as hotplug is currently *only* supported by the
-ConfigDrive, EC2, and OpenStack DataSources.
+'ConfigDrive', 'Ec2', and 'OpenStack' DataSources.
In order to make use of network hotplug you will need to do the following
-*two* things:
+*three* things:
+
+ - install and enable the Alpine "eudev" package for managing devices.
+
+ - add the /etc/init.d/cloud-init-hotplugd script to the "default"
+ run-level, i.e.
+
+ rc-update add cloud-init-hotplugd default
+
+ - enable hotplug for the relevant DataSource by adding the following to
+ either the /etc/cloud/cloud.cfg file, or to a newly created file in the
+ /etc/cloud/cloud.cfg.d directory, or else to the supplied user-data:
+
+ updates:
+ network:
+ when: ['boot','hotplug']
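The three steps above can be sketched as follows (steps 1 and 2 are shown
as comments since they require a real Alpine system; the fragment name
10-enable-hotplug.cfg is an arbitrary choice and the demonstration writes
to a temporary directory rather than /etc/cloud/cloud.cfg.d):

```shell
# Steps 1 and 2 require a real Alpine system:
#   apk add eudev && setup-devd udev
#   rc-update add cloud-init-hotplugd default

# Step 3: enable network hotplug via a cloud.cfg.d fragment.
cfg_dir="$(mktemp -d)"   # stand-in for /etc/cloud/cloud.cfg.d

cat > "$cfg_dir/10-enable-hotplug.cfg" <<'EOF'
updates:
  network:
    when: ['boot', 'hotplug']
EOF
```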
+
+
+Limitations
+===========
+
+Unable to resize LVM or LUKS rootfs partitions and filesystems on them
+----------------------------------------------------------------------
+
+cloud-init's cc_growpart module, if enabled, attempts to resize the root
+partition. If, however, the rootfs is on a /dev/mapper/ device (whether
+LVM, LUKS, LVM-on-LUKS, or LUKS-on-LVM), cc_growpart will be unable to
+resize the partition - this is due to a difference in behaviour between
+mdev (always used by Alpine's initramfs **even if eudev is used by the rest
+of the OS once the rootfs is mounted**) and eudev.
+
+When the initramfs' init runs it will trigger mdev to create a /dev/mapper/
+entry, if required, for the rootfs. With mdev this entry is an actual file,
+whereas eudev would create it as a softlink to the underlying /dev/dm-*
+device. Cloud-init's cc_growpart attempts to determine the device that
+provides the rootfs and, due to this mdev limitation, believes this is
+/dev/mapper/<whatever> rather than the actual /dev/dm-<whatever> device.
+The growpart utility can only resize the actual /dev/dm-* devices.
+
+This issue affects partition growing/fs resizing regardless of whether
+udev, mdev, or mdevd is enabled in the Alpine disk image - Alpine's
+initramfs' init *always* runs mdev regardless.
-- firstly, add the /etc/init.d/cloud-init-hotplugd script to the "default"
-run-level, i.e.
+As a workaround, the runcmd module can be used in the user-data provided
+to the VM to perform any resizing as follows:
- rc-update add cloud-init-hotplugd default
+For a LUKS-encrypted rootfs (assuming it is on a LUKS partition labelled
+"lukspart" and is device /dev/sda2):
-- secondly, enable hotplug for the relevant DataSource by adding the following
-to either the /etc/cloud.cfg file or else to the supplied user-data:
+ runcmd:
+ - growpart /dev/sda2
+ - cryptsetup resize lukspart
+ - resize2fs /dev/mapper/lukspart
- updates:
- network:
- when: ['boot','hotplug']
+For an LVM-based rootfs (assuming it is on an LVM LV called "rootfs" in
+VG "vg0", with /dev/sda2 being the partition LVM is using):
+
+ runcmd:
+ - growpart /dev/sda2
+ - lvextend -l +100%FREE /dev/mapper/vg0-rootfs
+ - resize2fs /dev/mapper/vg0-rootfs
@@ -166,107 +386,39 @@ key-based SSH logins.
Solution: install the openssh-server-pam package (rather than
openssh-server) and edit /etc/ssh/sshd_config to ensure that it defines
-"EnablePAM yes".
+"UsePAM yes".
In the future this package may add "openssh-server-pam" as a dependency,
but as some individuals may wish to use cloud-init without any SSH daemon
installed, that decision remains open.
-Missing dependencies
---------------------
-
-The cloud-init package declares dependencies for only commonly used
-cloud-init modules - if deps for all supported modules were defined then the
-dependency list would be quite large.
-
-As a result when building cloud-init based disk images you may need to
-manually install some packages required by some cloud-init modules.
-
-The following modules should work, in general, with the defined dependencies:
-
- cc_bootcmd
- cc_ca_certs
- cc_debug
- cc_disable_ec2_metadata
- cc_disk_setup
- cc_final_message
- cc_growpart
- cc_key_to_console
- cc_locale
- cc_migrator
- cc_mount
- cc_package_update_upgrade_install
- cc_phone_home
- cc_power_state_change
- cc_resizefs
- cc_resolv_conf
- cc_rsyslog
- cc_runcmd
- cc_scripts_per_boot
- cc_scripts_per_instance
- cc_scripts_per_once
- cc_scripts_user
- cc_scripts_vendor
- cc_seed_random
- cc_set_hostname
- cc_set_passwords
- cc_ssh
- cc_ssh_authkey_fingerprints
- cc_timezone
- cc_update_etc_hosts
- cc_update_hostname
- cc_users_groups
- cc_write_files
-
-If you want to delete existing partitions using cc_disk_setup then you will
-need to install the Alpine "wipefs" package.
-
-If you want to create/resize filesystems using cc_disk_setup and/or
-cc_resizefs then you will need to install the relevant package(s) containing
-the appropriate tools:
-
- BTRFS: btrfs-progs
- EXT2/3/4: e2fsprogs-extra
- F2FS: f2fs-tools
- LVM: lvm2
- XFS: xfsprogs and xfsprogs-extra
- ZFS: zfs
-
-
-cc_ca_certs module
-------------------
-
-The remove-defaults option of the cloud-init cc_ca_certs module does not
-currently work correctly. This option will delete certificates installed by
-the Alpine ca-certificates package as expected. However the certificates
-provided by the ca-certificates-bundle package, which is always automatically
-installed in an Alpine system due to it being a dependency of a base package,
-are not deleted.
-
-
Using ISO images for cloud-init configuration (i.e. with NoCloud/ConfigDrive)
-----------------------------------------------------------------------------
-With the removal of the util-linux dependency from the Alpine cloud-init
-package the "mount" command provided by Busybox will be used instead.
+The "mount" command in Alpine is provided by the "busybox" package by
+default.
+
+The "mount" package (Alpine v3.17+), "util-linux-misc" package (Alpine
+v3.15 & v3.16), or "util-linux" package (Alpine <= v3.14) provides an
+alternative, full featured, version of the "mount" command.
Cloud-init makes use of the mount command's "-t auto" option to mount a
filesystem containing cloud-init configuration data (detected by searching
for a filesystem with the label "cidata"). Busybox's mount command behaves
differently to util-linux's when the "-t auto" option is used,
-specifically if the kernel module for the required filesystem is not already
-loaded the util-linux mount command will trigger it to be loaded and so the
-mount will succeed. However Busybox's mount will not normally trigger a kernel
-module load and the mount will fail!
+specifically, if the kernel module for the required filesystem is not
+already loaded, the util-linux mount command will trigger it to be loaded
+and so the mount will succeed. However, Busybox's mount will not normally
+trigger a kernel module load and the mount will fail!
-When this problem occurs the following will be displayed on the console
-during boot:
+When this problem occurs a message such as the following will be displayed
+on the console during boot:
util.py[WARNING]: Failed to mount /dev/vdb when looking for data
-If cloud-init debugging is enabled then the file /var/log/cloud-init.log will
-also contain the following entries:
+If cloud-init debugging is enabled then the file /var/log/cloud-init.log
+will also contain the following entries:
subp.py[DEBUG]: Running command ['mount', '-o', 'ro', '-t', 'auto',
'/dev/vdb', '/run/cloud-init/tmp/tmpAbCdEf'] with allowed return codes [0]
@@ -283,29 +435,13 @@ also contain the following entries:
There are 2 possible solutions to this issue, either:
-(1) Install the util-linux package into the Alpine image used with
-cloud-init.
+(1) Install the mount (Edge and v3.17+) / util-linux-misc (v3.15 & v3.16) /
+ util-linux (<= v3.14) package in the Alpine image used with
+ cloud-init.
or:
-(2) Create (or modify) the file /etc/filesystem and ensure it has a line
-present with the name of the required kernel module for the relevant filesystem
-i.e. "iso9660". This will ensure that Busybox's mount will trigger the loading
-of this kernel module.
-
-
-CloudSigma and SmartOS data sources
------------------------------------
-
-If you are using either the CloudSigma or SmartOS/Joyent Cloud data sources
-then you will need to install the Alpine py3-pyserial package. This was removed
-as a cloud-init (hard) dependency as it is only used by these two uncommon
-Data Sources.
-
-
-MAAS data source
-----------------
-
-If you are using the MAAS data source then you will need to install the
-Alpine py3-oauthlib package. This was removed as a cloud-init (hard)
-dependency as it is only used by the MAAS Data Source.
+(2) Create (or modify) the file /etc/filesystems and ensure it has a line
+ present with the name of the required kernel module for the relevant
+ filesystem i.e. "iso9660". This will ensure that Busybox's mount will
+ trigger the loading of this kernel module.
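Solution (2) can be sketched as an idempotent one-liner (demonstrated on a
temporary file rather than the real /etc/filesystems):

```shell
# Demonstration file standing in for /etc/filesystems.
fs_file="$(mktemp)"

# Append "iso9660" only if not already present, so repeated runs do not
# accumulate duplicate entries.
grep -qx 'iso9660' "$fs_file" || echo 'iso9660' >> "$fs_file"
grep -qx 'iso9660' "$fs_file" || echo 'iso9660' >> "$fs_file"   # second run: no duplicate
```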