Merge remote-tracking branch 'origin/trunk' into xry111/arm64

This commit is contained in:
Xi Ruoyao 2022-10-02 21:49:39 +08:00
commit 96323bd9fc
No known key found for this signature in database
GPG Key ID: ACAAD20E19E710E3
28 changed files with 466 additions and 345 deletions

View File

@@ -34,7 +34,7 @@
# Default-Start: S
# Default-Stop:
# Short-Description: Checks local filesystems before mounting.
-# Description: Checks local filesystmes before mounting.
+# Description: Checks local filesystems before mounting.
# X-LFS-Provided-By: LFS
### END INIT INFO

View File

@@ -55,7 +55,7 @@ case "${1}" in
stop)
# Don't unmount virtual file systems like /run
log_info_msg "Unmounting all other currently mounted file systems..."
-# Ensure any loop devies are removed
+# Ensure any loop devices are removed
losetup -D
umount --all --detach-loop --read-only \
--types notmpfs,nosysfs,nodevtmpfs,noproc,nodevpts >/dev/null
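The exclusion list passed to `umount --all` above is doing the real work in this hunk. As a rough illustration only (not a command from the bootscript), the same filtering can be sketched against a /proc/mounts-style listing; the sample file and the `list_unmountable` helper below are hypothetical, made up for this sketch:

```shell
# Illustrative only: mimic the --types notmpfs,nosysfs,... exclusion by
# filtering a /proc/mounts-style listing (device mountpoint type options).
list_unmountable() {
  # field 3 of each line is the file system type; drop the virtual ones
  awk '$3 !~ /^(tmpfs|sysfs|devtmpfs|proc|devpts)$/ { print $2 }' "$1"
}

# A made-up sample resembling /proc/mounts
cat > /tmp/mounts.sample <<'EOF'
/dev/sda2 / ext4 rw 0 0
proc /proc proc rw 0 0
tmpfs /run tmpfs rw 0 0
/dev/sda3 /home ext4 rw 0 0
EOF

list_unmountable /tmp/mounts.sample   # prints "/" and "/home"
```

Only the real disk-backed mounts survive the filter, which is exactly what the bootscript wants to unmount at shutdown.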

View File

@@ -183,7 +183,7 @@ fi
# Start all services marked as S in this runlevel, except if marked as
# S in the previous runlevel
-# it is the responsabily of the script to not try to start an already running
+# it is the responsibility of the script to not try to start an already running
# service
for i in $( ls -v /etc/rc.d/rc${runlevel}.d/S* 2> /dev/null)
do

View File

@@ -45,7 +45,7 @@ case "${1}" in
# if it is possible to use killproc
killproc fully_qualified_path
# if it is not possible to use killproc
-# (the daemon shoudn't be stopped by killing it)
+# (the daemon shouldn't be stopped by killing it)
if pidofproc daemon_name_as_reported_by_ps >/dev/null; then
command_to_stop_the_service
fi
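The `stop)` skeleton in this hunk can be sketched end to end. This is a hedged illustration: the real `pidofproc` comes from the LFS bootscript functions file, so a simplified stand-in is defined here to keep the fragment self-contained, and the daemon names are hypothetical:

```shell
# Stand-in for the real pidofproc from the LFS functions file:
# pretend exactly one daemon name is running.
pidofproc() { [ "$1" = "pretend-running-daemon" ]; }

stop_service() {
  # $1: daemon name as reported by ps; $2: command that stops it
  if pidofproc "$1" >/dev/null; then
    eval "$2"
  else
    echo "$1 is not running"
  fi
}

stop_service pretend-running-daemon 'echo stopping'   # prints "stopping"
stop_service absent-daemon 'echo stopping'            # prints "absent-daemon is not running"
```

The point of the pattern in the template is the guard: only run the stop command if the daemon is actually alive.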

View File

@@ -155,7 +155,7 @@ start_daemon()
fi
# Return a value ONLY
-# It is the init script's (or distribution's functions) responsibilty
+# It is the init script's (or distribution's functions) responsibility
# to log messages!
case "${retval}" in
@@ -271,7 +271,7 @@ killproc()
fi
# Return a value ONLY
-# It is the init script's (or distribution's functions) responsibilty
+# It is the init script's (or distribution's functions) responsibility
# to log messages!
case "${retval}" in

View File

@@ -21,7 +21,7 @@
# dev creates a new device
# <devtype> is either block, char or pipe
# block creates a block device
-# char creates a character deivce
+# char creates a character device
# pipe creates a pipe, this will ignore the <major> and
# <minor> fields
# <major> and <minor> are the major and minor numbers used for

View File

@@ -32,7 +32,7 @@
#FAILURE_PREFIX="${FAILURE}*****${NORMAL} "
#WARNING_PREFIX="${WARNING} *** ${NORMAL} "
-# Manually seet the right edge of message output (characters)
+# Manually set the right edge of message output (characters)
# Useful when resetting console font during boot to override
# automatic screen width detection
#COLUMNS=120

View File

@@ -40,6 +40,48 @@
appropriate for the entry or if needed the entire day's listitem.
-->
<listitem>
+<para>2022-10-01</para>
+<itemizedlist>
+<listitem>
+<para>[bdubbs] - Update to iana-etc-20220922. Addresses
+<ulink url="&lfs-ticket-root;5006">#5006</ulink>.</para>
+</listitem>
+<listitem>
+<para>[bdubbs] - Update to tzdata-2022d. Fixes
+<ulink url="&lfs-ticket-root;5119">#5119</ulink>.</para>
+</listitem>
+<listitem>
+<para>[bdubbs] - Update to readline-8.2. Fixes
+<ulink url="&lfs-ticket-root;5121">#5121</ulink>.</para>
+</listitem>
+<listitem>
+<para>[bdubbs] - Update to linux-5.19.12. Fixes
+<ulink url="&lfs-ticket-root;5115">#5115</ulink>.</para>
+</listitem>
+<listitem>
+<para>[bdubbs] - Update to libffi-3.4.3. Fixes
+<ulink url="&lfs-ticket-root;5116">#5116</ulink>.</para>
+</listitem>
+<listitem>
+<para>[bdubbs] - Update to libcap-2.66. Fixes
+<ulink url="&lfs-ticket-root;5120">#5120</ulink>.</para>
+</listitem>
+<listitem revision="systemd">
+<para>[bdubbs] - Update to dbus-1.14.2. Fixes
+<ulink url="&lfs-ticket-root;5123">#5123</ulink>.</para>
+</listitem>
+<listitem>
+<para>[bdubbs] - Update to bc-6.0.4. Fixes
+<ulink url="&lfs-ticket-root;5114">#5114</ulink>.</para>
+</listitem>
+<listitem>
+<para>[bdubbs] - Update to bash-5.2. Fixes
+<ulink url="&lfs-ticket-root;5122">#5122</ulink>.</para>
+</listitem>
+</itemizedlist>
+</listitem>
<listitem>
<para>2022-09-22</para>
<itemizedlist>

View File

@@ -11,6 +11,14 @@
<title>What's new since the last release</title>
+<para>In 11.3 release, <parameter>--enable-default-pie</parameter>
+and <parameter>--enable-default-ssp</parameter> are enabled for GCC.
+They can mitigate some type of malicious attacks but they cannot provide
+a full protection. In case if you are reading a programming textbook,
+you may need to disable PIE and SSP with GCC options
+<parameter>-fno-pie -no-pie -fno-stack-protector</parameter>
+because some textbooks assume they were disabled by default.</para>
<para>Below is a list of package updates made since the previous
release of the book.</para>
@@ -38,9 +46,9 @@
<!--<listitem>
<para>Automake-&automake-version;</para>
</listitem>-->
-<!--<listitem>
+<listitem>
<para>Bash &bash-version;</para>
-</listitem>-->
+</listitem>
<listitem>
<para>Bc &bc-version;</para>
</listitem>
@@ -62,9 +70,9 @@
<!--<listitem>
<para>DejaGNU-&dejagnu-version;</para>
</listitem>-->
-<!--<listitem revision="systemd">
+<listitem revision="systemd">
<para>D-Bus-&dbus-version;</para>
-</listitem>-->
+</listitem>
<!--<listitem>
<para>Diffutils-&diffutils-version;</para>
</listitem>-->
@@ -122,9 +130,9 @@
<!--<listitem>
<para>Gzip-&gzip-version;</para>
</listitem>-->
-<!--<listitem>
+<listitem>
<para>IANA-Etc-&iana-etc-version;</para>
-</listitem>-->
+</listitem>
<!--<listitem>
<para>Inetutils-&inetutils-version;</para>
</listitem>-->
@@ -149,15 +157,15 @@
<!--<listitem>
<para>LFS-Bootscripts-&lfs-bootscripts-version;</para>
</listitem>-->
-<!--<listitem>
+<listitem>
<para>Libcap-&libcap-version;</para>
-</listitem>-->
+</listitem>
<!--<listitem>
<para>Libelf-&elfutils-version; (from elfutils)</para>
</listitem>-->
-<!--<listitem>
+<listitem>
<para>Libffi-&libffi-version;</para>
-</listitem>-->
+</listitem>
<!--<listitem>
<para>Libpipeline-&libpipeline-version;</para>
</listitem>-->
@@ -218,9 +226,9 @@
<listitem>
<para>Python-&python-version;</para>
</listitem>
-<!--<listitem>
+<listitem>
<para>Readline-&readline-version;</para>
-</listitem>-->
+</listitem>
<!--<listitem>
<para>Sed-&sed-version;</para>
</listitem>-->
@@ -245,9 +253,9 @@
<!--<listitem>
<para>Texinfo-&texinfo-version;</para>
</listitem>-->
-<!--<listitem>
+<listitem>
<para>Tzdata-&tzdata-version;</para>
-</listitem>-->
+</listitem>
<!--<listitem>
<para>Util-Linux-&util-linux-version;</para>
</listitem>-->

View File

@@ -15,6 +15,11 @@
the file system is mounted at the directory specified by the
<envar>LFS</envar> environment variable described in the previous section.
</para>
+<para>Strictly speaking, one cannot "mount a partition". One mounts the <emphasis>file
+system</emphasis> embedded in that partition. But since a single partition can't contain
+more than one file system, people often speak of the partition and the
+associated file system as if they were one and the same.</para>
<para>Create the mount point and mount the LFS file system with these commands:</para>

View File

@@ -104,4 +104,14 @@ popd</userinput></screen>
<para>This check can be used after retrieving the needed files with any of the
methods listed above.</para>
+<para>If the packages and patches are downloaded as a non-&root; user,
+these files will be owned by the user. The file system records the
+owner by its UID, and the UID of a normal user in the host distro is
+not assigned in LFS. So the files will be left owned by an unnamed UID
+in the final LFS system. If you won't assign the same UID for your user
+in the LFS system, change the owners of these files to &root; now to
+avoid this issue:</para>
+<screen role="nodump"><userinput>chown root:root $LFS/sources/*</userinput></screen>
</sect1>
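The added paragraph's point about UIDs can be seen directly with `stat`: the file system stores only a number, and the name is looked up in /etc/passwd at display time. A quick illustration (not a command from the book; the demo file path is made up):

```shell
# The file system records only the numeric owner; the owner *name* is
# resolved through /etc/passwd whenever a tool displays it.
touch /tmp/uid-demo
stat -c '%u' /tmp/uid-demo   # raw UID stored on disk
stat -c '%U' /tmp/uid-demo   # resolved name, or "UNKNOWN" if the UID is unmapped
```

Files copied into the LFS system keep the raw number; if no LFS account carries that UID, the second command inside the finished system would print UNKNOWN, which is the situation the `chown root:root` command avoids.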

View File

@@ -13,25 +13,25 @@
<para>Many people would like to know beforehand approximately how long
it takes to compile and install each package. Because Linux From
Scratch can be built on many different systems, it is impossible to
-provide accurate time estimates. The biggest package (Glibc) will
+provide absolute time estimates. The biggest package (Glibc) will
take approximately 20 minutes on the fastest systems, but could take
up to three days on slower systems! Instead of providing actual times,
the Standard Build Unit (SBU) measure will be
used instead.</para>
<para>The SBU measure works as follows. The first package to be compiled
-from this book is binutils in <xref linkend="chapter-cross-tools"/>. The
-time it takes to compile this package is what will be referred to as the
-Standard Build Unit or SBU. All other compile times will be expressed relative
-to this time.</para>
+is binutils in <xref linkend="chapter-cross-tools"/>. The
+time it takes to compile this package is what we will refer to as the
+Standard Build Unit or SBU. All other compile times will be expressed in
+terms of this unit of time.</para>
<para>For example, consider a package whose compilation time is 4.5
-SBUs. This means that if a system took 10 minutes to compile and
+SBUs. This means that if your system took 10 minutes to compile and
install the first pass of binutils, it will take
-<emphasis>approximately</emphasis> 45 minutes to build this example package.
-Fortunately, most build times are shorter than the one for binutils.</para>
+<emphasis>approximately</emphasis> 45 minutes to build the example package.
+Fortunately, most build times are shorter than one SBU.</para>
-<para>In general, SBUs are not entirely accurate because they depend on many
+<para>SBUs are not entirely accurate because they depend on many
factors, including the host system's version of GCC. They are provided here
to give an estimate of how long it might take to install a package, but the
numbers can vary by as much as dozens of minutes in some cases.</para>
@@ -45,15 +45,15 @@
<screen role="nodump"><userinput>export MAKEFLAGS='-j4'</userinput></screen>
-<para>or just building with:</para>
+<para>or by building with:</para>
<screen role="nodump"><userinput>make -j4</userinput></screen>
<para>When multiple processors are used in this way, the SBU units in the
book will vary even more than they normally would. In some cases, the make
step will simply fail. Analyzing the output of the build process will also
-be more difficult because the lines of different processes will be
-interleaved. If you run into a problem with a build step, revert back to a
+be more difficult because the lines from different processes will be
+interleaved. If you run into a problem with a build step, revert to a
single processor build to properly analyze the error messages.</para>
</note>
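The SBU arithmetic in the revised example is simple proportionality and can be checked in the shell; the 10-minute and 4.5-SBU figures below are the book's illustrative numbers, not measurements:

```shell
# 1 SBU = the time of the first binutils pass. A 4.5 SBU package on a
# 10-minutes-per-SBU machine takes about 45 minutes. The SBU count is
# scaled by 10 so the 4.5 fits shell integer arithmetic.
sbu_minutes=10
package_tenths_of_sbu=45    # i.e. 4.5 SBU
echo "$(( sbu_minutes * package_tenths_of_sbu / 10 )) minutes"   # prints "45 minutes"
```

The same proportion scales in the other direction: on a machine where binutils takes 40 minutes, the example package would take about three hours.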

View File

@@ -27,21 +27,21 @@
<note>
<para>Running the test suites in <xref linkend="chapter-cross-tools"/>
and <xref linkend="chapter-temporary-tools"/>
-is impossible, since the programs are compiled with a cross-compiler,
-so are not supposed to be able to run on the build host.</para>
+is pointless; since the test programs are compiled with a cross-compiler,
+they probably can't run on the build host.</para>
</note>
<para>A common issue with running the test suites for binutils and GCC
-is running out of pseudo terminals (PTYs). This can result in a high
+is running out of pseudo terminals (PTYs). This can result in a large
number of failing tests. This may happen for several reasons, but the
most likely cause is that the host system does not have the
<systemitem class="filesystem">devpts</systemitem> file system set up
correctly. This issue is discussed in greater detail at
<ulink url="&lfs-root;lfs/faq.html#no-ptys"/>.</para>
-<para>Sometimes package test suites will fail, but for reasons which the
+<para>Sometimes package test suites will fail for reasons which the
developers are aware of and have deemed non-critical. Consult the logs located
at <ulink url="&test-results;"/> to verify whether or not these failures are
-expected. This site is valid for all tests throughout this book.</para>
+expected. This site is valid for all test suites throughout this book.</para>
</sect1>
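The devpts problem mentioned in this section can be checked before running any test suite. One possible check (an illustration, not a command from the book) is to look for the entry in the kernel's mount table:

```shell
# Look for a devpts entry mounted on /dev/pts; if it is absent, the
# binutils and GCC test suites may fail en masse for lack of PTYs.
if grep -q ' /dev/pts devpts ' /proc/mounts; then
  echo "devpts is mounted"
else
  echo "devpts is not mounted - see the LFS FAQ"
fi
```

On most modern distributions devpts is mounted automatically; the manual check matters mainly on minimal or container-based hosts.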

View File

@@ -14,9 +14,9 @@
making a single mistake can damage or destroy a system. Therefore,
the packages in the next two chapters are built as an unprivileged user.
You could use your own user name, but to make it easier to set up a clean
-working environment, create a new user called <systemitem
+working environment, we will create a new user called <systemitem
class="username">lfs</systemitem> as a member of a new group (also named
-<systemitem class="groupname">lfs</systemitem>) and use this user during
+<systemitem class="groupname">lfs</systemitem>) and run commands as &lfs-user; during
the installation process. As <systemitem class="username">root</systemitem>,
issue the following commands to add the new user:</para>
@@ -24,7 +24,7 @@
useradd -s /bin/bash -g lfs -m -k /dev/null lfs</userinput></screen>
<variablelist>
-<title>The meaning of the command line options:</title>
+<title>This is what the command line options mean:</title>
<varlistentry>
<term><parameter>-s /bin/bash</parameter></term>
@@ -54,7 +54,7 @@ useradd -s /bin/bash -g lfs -m -k /dev/null lfs</userinput></screen>
<term><parameter>-k /dev/null</parameter></term>
<listitem>
<para>This parameter prevents possible copying of files from a skeleton
-directory (default is <filename class="directory">/etc/skel</filename>)
+directory (the default is <filename class="directory">/etc/skel</filename>)
by changing the input location to the special null device.</para>
</listitem>
</varlistentry>
@@ -68,34 +68,34 @@ useradd -s /bin/bash -g lfs -m -k /dev/null lfs</userinput></screen>
</variablelist>
-<para>To log in as <systemitem class="username">lfs</systemitem> (as opposed
-to switching to user <systemitem class="username">lfs</systemitem> when logged
-in as <systemitem class="username">root</systemitem>, which does not require
-the <systemitem class="username">lfs</systemitem> user to have a password),
-give <systemitem class="username">lfs</systemitem> a password:</para>
+<para>If you want to log in as &lfs-user; or switch to &lfs-user; from a
+non-&root; user (as opposed to switching to user &lfs-user;
+when logged in as &root;, which does not require the &lfs-user; user to
+have a password), you need to set a password of &lfs-user;. Issue the
+following command as the &root; user to set the password:</para>
<screen role="nodump"><userinput>passwd lfs</userinput></screen>
<para>Grant <systemitem class="username">lfs</systemitem> full access to
-all directories under <filename class="directory">$LFS</filename> by making
-<systemitem class="username">lfs</systemitem> the directory owner:</para>
+all the directories under <filename class="directory">$LFS</filename> by making
+<systemitem class="username">lfs</systemitem> the owner:</para>
<screen><userinput>chown -v lfs $LFS/{usr{,/*},lib,var,etc,bin,sbin,tools}</userinput></screen>
-<note><para>In some host systems, the following command does not complete
-properly and suspends the login to the &lfs-user; user to the background.
+<note><para>In some host systems, the following <command>su</command> command does not complete
+properly and suspends the login for the &lfs-user; user to the background.
If the prompt "lfs:~$" does not appear immediately, entering the
<command>fg</command> command will fix the issue.</para></note>
-<para>Next, login as user <systemitem class="username">lfs</systemitem>.
-This can be done via a virtual console, through a display manager, or with
-the following substitute/switch user command:</para>
+<para>Next, start a shell running as user &lfs-user;. This can be done by
+logging in as &lfs-user; on a virtual console, or with the following
+substitute/switch user command:</para>
<screen role="nodump"><userinput>su - lfs</userinput></screen>
<para>The <quote><parameter>-</parameter></quote> instructs
<command>su</command> to start a login shell as opposed to a non-login shell.
-The difference between these two types of shells can be found in detail in
+The difference between these two types of shells is described in detail in
<filename>bash(1)</filename> and <command>info bash</command>.</para>
</sect1>
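The login vs. non-login distinction controlled by the `-` option can be observed directly with bash's `login_shell` option. This is an illustration, not a step from the book (`--noprofile`/`--norc` keep the host's startup files from interfering with the output):

```shell
# bash sets the login_shell shell option when invoked as a login shell
# (the -l flag, which is what "su - lfs" arranges) and leaves it off in
# an ordinary invocation.
bash --norc -c 'shopt login_shell' || true        # reports: login_shell off
bash --noprofile --norc -lc 'shopt login_shell'   # reports: login_shell on
```

`su lfs` (without the `-`) would give the "off" behavior: the shell inherits the caller's environment instead of reading the login startup files.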

View File

@@ -10,8 +10,9 @@
<title>Creating a limited directory layout in LFS filesystem</title>
-<para>The next task to be performed in the LFS partition is to create a limited
-directory hierarchy, so that the programs compiled in <xref
+<para>In this section, we begin populating the LFS filesystem with the
+pieces that will constitute the final Linux system. The first step is to
+create a limited directory hierarchy, so that the programs compiled in <xref
linkend="chapter-temporary-tools"/> (as well as glibc and libstdc++ in <xref
linkend="chapter-cross-tools"/>) can be installed in their final
location. We do this so those temporary programs will be overwritten when
View File

@@ -19,8 +19,10 @@
<literal>exec env -i HOME=$HOME TERM=$TERM PS1='\u:\w\$ ' /bin/bash</literal>
EOF</userinput></screen>
-<para>When logged on as user <systemitem class="username">lfs</systemitem>,
-the initial shell is usually a <emphasis>login</emphasis> shell which reads
+<para>When logged on as user <systemitem class="username">lfs</systemitem>
+or switched to the &lfs-user; user using a <command>su</command> command
+with <quote><parameter>-</parameter></quote> option,
+the initial shell is a <emphasis>login</emphasis> shell which reads
the <filename>/etc/profile</filename> of the host (probably containing some
settings and environment variables) and then <filename>.bash_profile</filename>.
The <command>exec env -i.../bin/bash</command> command in the
@@ -32,7 +34,7 @@ EOF</userinput></screen>
ensuring a clean environment.</para>
<para>The new instance of the shell is a <emphasis>non-login</emphasis>
-shell, which does not read, and execute, the contents of <filename>/etc/profile</filename> or
+shell, which does not read, and execute, the contents of the <filename>/etc/profile</filename> or
<filename>.bash_profile</filename> files, but rather reads, and executes, the
<filename>.bashrc</filename> file instead. Create the
<filename>.bashrc</filename> file now:</para>
@@ -59,10 +61,10 @@ EOF</userinput></screen>
<para>The <command>set +h</command> command turns off
<command>bash</command>'s hash function. Hashing is ordinarily a useful
feature&mdash;<command>bash</command> uses a hash table to remember the
-full path of executable files to avoid searching the <envar>PATH</envar>
+full path to executable files to avoid searching the <envar>PATH</envar>
time and again to find the same executable. However, the new tools should
-be used as soon as they are installed. By switching off the hash function,
-the shell will always search the <envar>PATH</envar> when a program is to
+be used as soon as they are installed. Switching off the hash function forces
+the shell to search the <envar>PATH</envar> whenever a program is to
be run. As such, the shell will find the newly compiled tools in
<filename class="directory">$LFS/tools/bin</filename> as soon as they are
available without remembering a previous version of the same program
@@ -115,10 +117,10 @@ EOF</userinput></screen>
<varlistentry>
<term><parameter>PATH=/usr/bin</parameter></term>
<listitem>
-<para>Many modern linux distributions have merged <filename
+<para>Many modern Linux distributions have merged <filename
class="directory">/bin</filename> and <filename
class="directory">/usr/bin</filename>. When this is the case, the standard
-<envar>PATH</envar> variable needs just to be set to <filename
+<envar>PATH</envar> variable should be set to <filename
class="directory">/usr/bin/</filename> for the <xref
linkend="chapter-temporary-tools"/> environment. When this is not the
case, the following line adds <filename class="directory">/bin</filename>
@@ -141,7 +143,7 @@ EOF</userinput></screen>
standard <envar>PATH</envar>, the cross-compiler installed at the beginning
of <xref linkend="chapter-cross-tools"/> is picked up by the shell
immediately after its installation. This, combined with turning off hashing,
-limits the risk that the compiler from the host be used instead of the
+limits the risk that the compiler from the host is used instead of the
cross-compiler.</para>
</listitem>
</varlistentry>
@@ -195,7 +197,8 @@ EOF</userinput></screen>
</important>
<para>Finally, to have the environment fully prepared for building the
-temporary tools, source the just-created user profile:</para>
+temporary tools, force the <command>bash</command> shell to read
+the new user profile:</para>
<screen><userinput>source ~/.bash_profile</userinput></screen>

View File

@@ -10,10 +10,10 @@
<title>Creating Directories</title>
-<para>It is time to create the full structure in the LFS file system.</para>
+<para>It is time to create the full directory structure in the LFS file system.</para>
-<note><para>Some of the directories mentioned in this section may be
-already created earlier with explicit instructions or when installing some
+<note><para>Some of the directories mentioned in this section may have
+already been created earlier with explicit instructions, or when installing some
packages. They are repeated below for completeness.</para></note>
<para>Create some root-level directories that are not in the limited set
@@ -42,14 +42,14 @@ install -dv -m 0750 /root
install -dv -m 1777 /tmp /var/tmp</userinput></screen>
<para>Directories are, by default, created with permission mode 755, but
-this is not desirable for all directories. In the commands above, two
+this is not desirable everywhere. In the commands above, two
changes are made&mdash;one to the home directory of user <systemitem
class="username">root</systemitem>, and another to the directories for
temporary files.</para>
<para>The first mode change ensures that not just anybody can enter
-the <filename class="directory">/root</filename> directory&mdash;the
-same as a normal user would do with his or her home directory. The
+the <filename class="directory">/root</filename> directory&mdash;just
+like a normal user would do with his or her own home directory. The
second mode change makes sure that any user can write to the
<filename class="directory">/tmp</filename> and <filename
class="directory">/var/tmp</filename> directories, but cannot remove
@@ -59,14 +59,14 @@ install -dv -m 1777 /tmp /var/tmp</userinput></screen>
<sect2>
<title>FHS Compliance Note</title>
-<para>The directory tree is based on the Filesystem Hierarchy Standard
+<para>This directory tree is based on the Filesystem Hierarchy Standard
(FHS) (available at <ulink
url="https://refspecs.linuxfoundation.org/fhs.shtml"/>). The FHS also specifies
-the optional existence of some directories such as <filename
+the optional existence of additional directories such as <filename
class="directory">/usr/local/games</filename> and <filename
-class="directory">/usr/share/games</filename>. We create only the
-directories that are needed. However, feel free to create these
-directories. </para>
+class="directory">/usr/share/games</filename>. In LFS, we create only the
+directories that are really necessary. However, feel free to create more
+directories, if you wish. </para>
</sect2>

View File

@@ -11,22 +11,22 @@
<title>Introduction</title>
<para>This chapter shows how to build the last missing bits of the temporary
-system: the tools needed by the build machinery of various packages. Now
+system: the tools needed to build the various packages. Now
that all circular dependencies have been resolved, a <quote>chroot</quote>
environment, completely isolated from the host operating system (except for
the running kernel), can be used for the build.</para>
<para>For proper operation of the isolated environment, some communication
-with the running kernel must be established. This is done through the
-so-called <emphasis>Virtual Kernel File Systems</emphasis>, which must be
-mounted when entering the chroot environment. You may want to check
-that they are mounted by issuing <command>findmnt</command>.</para>
+with the running kernel must be established. This is done via the
+so-called <emphasis>Virtual Kernel File Systems</emphasis>, which will be
+mounted before entering the chroot environment. You may want to verify
+that they are mounted by issuing the <command>findmnt</command> command.</para>
<para>Until <xref linkend="ch-tools-chroot"/>, the commands must be
run as <systemitem class="username">root</systemitem>, with the
<envar>LFS</envar> variable set. After entering chroot, all commands
are run as &root;, fortunately without access to the OS of the computer
you built LFS on. Be careful anyway, as it is easy to destroy the whole
-LFS system with badly formed commands.</para>
+LFS system with bad commands.</para>
</sect1>

View File

@@ -14,12 +14,14 @@
<primary sortas="e-/dev/">/dev/*</primary>
</indexterm>
-<para>Various file systems exported by the kernel are used to communicate to
-and from the kernel itself. These file systems are virtual in that no disk
+<para>Applications running in user space utilize various file
+systems exported by the kernel to communicate
+with the kernel itself. These file systems are virtual: no disk
space is used for them. The content of the file systems resides in
-memory.</para>
+memory. These file systems must be mounted in the $LFS directory tree
+so the applications can find them in the chroot environment.</para>
-<para>Begin by creating directories onto which the file systems will be
+<para>Begin by creating directories on which the file systems will be
mounted:</para>
<screen><userinput>mkdir -pv $LFS/{dev,proc,sys,run}</userinput></screen>
@@ -27,20 +29,31 @@
<sect2 id="ch-tools-bindmount">
<title>Mounting and Populating /dev</title>
-<para>During a normal boot, the kernel automatically mounts the
-<systemitem class="filesystem">devtmpfs</systemitem> filesystem on the
-<filename class="directory">/dev</filename> directory, and allow the
-devices to be created dynamically on that virtual filesystem as they
-are detected or accessed. Device creation is generally done during the
-boot process by the kernel and Udev.
-Since this new system does not yet have Udev and
-has not yet been booted, it is necessary to mount and populate
-<filename class="directory">/dev</filename> manually. This is
-accomplished by bind mounting the host system's
+<para>During a normal boot of the LFS system, the kernel automatically
+mounts the <systemitem class="filesystem">devtmpfs</systemitem>
+filesystem on the
+<filename class="directory">/dev</filename> directory; the kernel
+creates device nodes on that virtual filesystem during the boot process
+or when a device is first detected or accessed. The udev daemon may
+change the owner or permission of the device nodes created by the
+kernel, or create new device nodes or symlinks to ease the work of
+distro maintainers or system administrators. (See
+<xref linkend='ch-config-udev-device-node-creation'/> for details.)
+If the host kernel supports &devtmpfs;, we can simply mount a
+&devtmpfs; at <filename class='directory'>$LFS/dev</filename> and rely
+on the kernel to populate it (the LFS building process does not need
+the additional work onto &devtmpfs; by udev daemon).</para>
+<para>But, some host kernels may lack &devtmpfs; support and these
+host distros maintain the content of
+<filename class="directory">/dev</filename> with different methods.
+So the only host-agnostic way for populating
+<filename class="directory">$LFS/dev</filename> is
+bind mounting the host system's
<filename class="directory">/dev</filename> directory. A bind mount is
a special type of mount that allows you to create a mirror of a
-directory or mount point to some other location. Use the following
-command to achieve this:</para>
+directory or mount point at some other location. Use the following
+command to do this:</para>
<screen><userinput>mount -v --bind /dev $LFS/dev</userinput></screen>
@@ -89,10 +102,10 @@ mount -vt tmpfs tmpfs $LFS/run</userinput></screen>
The /run tmpfs was mounted above so in this case only a
directory needs to be created.</para>
-<para>In other cases <filename>/dev/shm</filename> is a mountpoint
+<para>In other host systems <filename>/dev/shm</filename> is a mount point
for a tmpfs. In that case the mount of /dev above will only create
-/dev/shm in the chroot environment as a directory. In this situation
-we explicitly mount a tmpfs,</para>
+/dev/shm as a directory in the chroot environment. In this situation
+we must explicitly mount a tmpfs:</para>
<screen><userinput>if [ -h $LFS/dev/shm ]; then
mkdir -pv $LFS/$(readlink $LFS/dev/shm)
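Whether it is $LFS/dev, /dev/shm, or any of the other virtual file systems in this section, the kernel's mount table tells you what actually got mounted, without needing root privileges. A hedged helper (hypothetical, not from the book) along the lines of what `findmnt` does:

```shell
# Check whether a given path is currently a mount point by consulting
# /proc/mounts (the second field of each line is the mount target).
is_mounted() { grep -q " $1 " /proc/mounts; }

LFS=/mnt/lfs    # the book's conventional location; adjust to your setup
if is_mounted "$LFS/dev"; then
  echo "$LFS/dev is mounted"
else
  echo "$LFS/dev is not mounted yet"
fi
```

Running such a check before chrooting catches the classic mistake of entering the chroot with /dev or /proc missing, which makes many later build steps fail confusingly.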

View File

@@ -40,12 +40,13 @@
<sect2 role="installation">
<title>Installation of Autoconf</title>
<!--
<para>First, apply a patch fixes several problems that occur with the latest
perl, libtool, and bash versions.</para>
<screen><userinput remap="pre">patch -Np1 -i ../&autoconf-fixes-patch;</userinput></screen>
-->
+<para>First, fix several problems with the tests caused by bash-5.2 and later:</para>
+<screen><userinput remap="pre">sed -e 's/SECONDS|/&amp;SHLVL|/' \
+-e '/BASH_ARGV=/a\ /^SHLVL=/ d' \
+-i.orig tests/local.at</userinput></screen>
<para>Prepare Autoconf for compilation:</para>
<screen><userinput remap="configure">./configure --prefix=/usr</userinput></screen>

View File

@@ -11,13 +11,13 @@
<title>Package Management</title>
<para>Package Management is an often requested addition to the LFS Book. A
-Package Manager allows tracking the installation of files making it easy to
+Package Manager tracks the installation of files, making it easier to
remove and upgrade packages. As well as the binary and library files, a
package manager will handle the installation of configuration files. Before
you begin to wonder, NO&mdash;this section will not talk about nor recommend
any particular package manager. What it provides is a roundup of the more
popular techniques and how they work. The perfect package manager for you may
-be among these techniques or may be a combination of two or more of these
+be among these techniques, or it may be a combination of two or more of these
techniques. This section briefly mentions issues that may arise when upgrading
packages.</para>
@@ -32,14 +32,14 @@
<listitem>
<para>There are multiple solutions for package management, each having
-its strengths and drawbacks. Including one that satisfies all audiences
+its strengths and drawbacks. Finding one solution that satisfies all audiences
is difficult.</para>
</listitem>
</itemizedlist>
<para>There are some hints written on the topic of package management. Visit
the <ulink url="&hints-root;">Hints Project</ulink> and see if one of them
-fits your need.</para>
+fits your needs.</para>
<sect2 id='pkgmgmt-upgrade-issues'>
<title>Upgrade Issues</title>
@@ -51,18 +51,18 @@
<itemizedlist>
<listitem>
-<para>If Linux kernel needs to be upgraded (for example, from
-5.10.17 to 5.10.18 or 5.11.1), nothing else need to be rebuilt.
-The system will keep working fine thanks to the well-defined border
-between kernel and userspace. Specifically, Linux API headers
-need not to be (and should not be, see the next item) upgraded
-alongside the kernel. You'll need to reboot your system to use the
+<para>If the Linux kernel needs to be upgraded (for example, from
+5.10.17 to 5.10.18 or 5.11.1), nothing else needs to be rebuilt.
+The system will keep working fine thanks to the well-defined interface
+between the kernel and user space. Specifically, Linux API headers
+need not be (and should not be, see the next item) upgraded
+along with the kernel. You will merely need to reboot your system to use the
upgraded kernel.</para>
</listitem>
<listitem>
-<para>If Linux API headers or Glibc needs to be upgraded to a newer
-version, (e.g. from glibc-2.31 to glibc-2.32), it is safer to
+<para>If Linux API headers or glibc need to be upgraded to a newer
+version, (e.g., from glibc-2.31 to glibc-2.32), it is safer to
rebuild LFS. Though you <emphasis>may</emphasis> be able to rebuild
all the packages in their dependency order, we do not recommend
it. </para>
@@ -70,44 +70,44 @@
<listitem> <para>If a package containing a shared library is updated, and
if the name of the library changes, then any packages dynamically
-linked to the library need to be recompiled in order to link against the
+linked to the library must be recompiled, to link against the
newer library. (Note that there is no correlation between the package
version and the name of the library.) For example, consider a package
-foo-1.2.3 that installs a shared library with name <filename
-class='libraryfile'>libfoo.so.1</filename>. If you upgrade the package to
-a newer version foo-1.2.4 that installs a shared library with name
+foo-1.2.3 that installs a shared library with the name <filename
+class='libraryfile'>libfoo.so.1</filename>. Suppose you upgrade the package to
+a newer version foo-1.2.4 that installs a shared library with the name
<filename class='libraryfile'>libfoo.so.2</filename>. In this case, any
packages that are dynamically linked to <filename
class='libraryfile'>libfoo.so.1</filename> need to be recompiled to link
against <filename class='libraryfile'>libfoo.so.2</filename> in order to
-use the new library version. You should not remove the previous
-libraries unless all the dependent packages are recompiled.</para>
+use the new library version. You should not remove the old
+libraries until all the dependent packages have been recompiled.</para>
</listitem>
<listitem> <para>If a package containing a shared library is updated,
-and the name of library doesn't change, but the version number of the
+and the name of the library doesn't change, but the version number of the
library <emphasis role="bold">file</emphasis> decreases (for example,
-the name of the library is kept named
+the library is still named
<filename class='libraryfile'>libfoo.so.1</filename>,
-but the name of library file is changed from
+but the name of the library file is changed from
<filename class='libraryfile'>libfoo.so.1.25</filename> to
<filename class='libraryfile'>libfoo.so.1.24</filename>),
you should remove the library file from the previously installed version
(<filename class='libraryfile'>libfoo.so.1.25</filename> in the case).
Or, a <command>ldconfig</command> run (by yourself using a command
(<filename class='libraryfile'>libfoo.so.1.25</filename> in this case).
Otherwise, an <command>ldconfig</command> run (invoked by you from the command
line, or by the installation of some package) will reset the symlink
<filename class='libraryfile'>libfoo.so.1</filename> to point to
the old library file because it seems having a <quote>newer</quote>
version, as its version number is larger. This situation may happen if
you have to downgrade a package, or the package changes the versioning
scheme of library files suddenly.</para> </listitem>
the old library file because it seems to be a <quote>newer</quote>
version; its version number is larger. This situation may arise if
you have to downgrade a package, or if the authors change the versioning
scheme for library files.</para> </listitem>
<listitem><para>If a package containing a shared library is updated,
and the name of library doesn't change, but a severe issue
and the name of the library doesn't change, but a severe issue
(especially, a security vulnerability) is fixed, all running programs
linked to the shared library should be restarted. The following
command, run as <systemitem class="username">root</systemitem> after
updating, will list what is using the old versions of those libraries
the update is complete, will list which processes are using the old versions of those libraries
(replace <replaceable>libfoo</replaceable> with the name of the
library):</para>
@ -115,33 +115,33 @@
tr -cd 0-9\\n | xargs -r ps u</userinput></screen>
<para>
If <application>OpenSSH</application> is being used for accessing
the system and it is linked to the updated library, you need to
restart <command>sshd</command> service, then logout, login again,
and rerun that command to confirm nothing is still using the
If <application>OpenSSH</application> is being used to access
the system and it is linked to the updated library, you must
restart the <command>sshd</command> service, then log out, log in again,
and rerun the preceding ps command to confirm that nothing is still using the
deleted libraries.
</para>
<para revision='systemd'>
If the <command>systemd</command> daemon (running as PID 1) is
linked to the updated library, you can restart it without reboot
linked to the updated library, you can restart it without rebooting
by running <command>systemctl daemon-reexec</command> as the
<systemitem class='username'>root</systemitem> user.
</para></listitem>
<listitem>
<para>If a binary or a shared library is overwritten, the processes
using the code or data in the binary or library may crash. The
correct way to update a binary or a shared library without causing
<para>If an executable program or a shared library is overwritten, the processes
using the code or data in that program or library may crash. The
correct way to update a program or a shared library without causing
the process to crash is to remove it first, then install the new
version into position. The <command>install</command> command
provided by <application>Coreutils</application> has already
implemented this and most packages use it to install binaries and
version. The <command>install</command> command
provided by <application>coreutils</application> has already
implemented this, and most packages use that command to install binary files and
libraries. This means that you won't be troubled by this issue most of the time.
However, the install process of some packages (notably Mozilla JS
in BLFS) just overwrites the file if it exists and causes a crash, so
in BLFS) just overwrites the file if it exists; this causes a crash. So
it's safer to save your work and close unneeded running processes
before updating a package.</para>
before updating a package.</para> <!-- binary is an adjective, not a noun. -->
</listitem>
</itemizedlist>
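Two of the checks discussed in the items above can be sketched in shell. The library name is a placeholder, and the first command assumes a Linux system that provides <command>ldd</command>:

```shell
# 1. Which shared objects does a dynamically linked program request?
#    /bin/sh and the C library stand in here for a package binary
#    and "libfoo".
ldd /bin/sh | grep libc

# 2. Which running processes still map a deleted library? "libfoo"
#    is a placeholder name; run as root to see every process.
grep -l 'libfoo.*(deleted)' /proc/*/maps 2>/dev/null |
    tr -cd '0-9\n' | xargs -r ps u
```

The second pipeline prints nothing when no process is still using a deleted copy of the library, which is exactly the state you want after restarting the affected services.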
@ -152,36 +152,36 @@
<para>The following are some common package management techniques. Before
making a decision on a package manager, do some research on the various
techniques, particularly the drawbacks of the particular scheme.</para>
techniques, particularly the drawbacks of each particular scheme.</para>
<sect3>
<title>It is All in My Head!</title>
<para>Yes, this is a package management technique. Some folks do not find
the need for a package manager because they know the packages intimately
and know what files are installed by each package. Some users also do not
<para>Yes, this is a package management technique. Some folks do not
need a package manager because they know the packages intimately
and know which files are installed by each package. Some users also do not
need any package management because they plan on rebuilding the entire
system when a package is changed.</para>
system whenever a package is changed.</para>
</sect3>
<sect3>
<title>Install in Separate Directories</title>
<para>This is a simplistic package management that does not need any extra
package to manage the installations. Each package is installed in a
<para>This is a simplistic package management technique that does not need a
special program to manage the packages. Each package is installed in a
separate directory. For example, package foo-1.1 is installed in
<filename class='directory'>/usr/pkg/foo-1.1</filename>
and a symlink is made from <filename>/usr/pkg/foo</filename> to
<filename class='directory'>/usr/pkg/foo-1.1</filename>. When installing
a new version foo-1.2, it is installed in
<filename class='directory'>/usr/pkg/foo-1.1</filename>. When
a new version foo-1.2 comes along, it is installed in
<filename class='directory'>/usr/pkg/foo-1.2</filename> and the previous
symlink is replaced by a symlink to the new version.</para>
<para>Environment variables such as <envar>PATH</envar>,
<envar>LD_LIBRARY_PATH</envar>, <envar>MANPATH</envar>,
<envar>INFOPATH</envar> and <envar>CPPFLAGS</envar> need to be expanded to
include <filename>/usr/pkg/foo</filename>. For more than a few packages,
include <filename>/usr/pkg/foo</filename>. If you install more than a few packages,
this scheme becomes unmanageable.</para>
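The layout can be sketched as follows. A throwaway sandbox stands in for <filename class='directory'>/usr/pkg</filename> so the sketch is safe to run anywhere, and "foo" is a hypothetical package:

```shell
# Versioned-directory scheme, sketched in a sandbox.
PKGROOT=$(mktemp -d)

# Install foo-1.1 into its own directory, then point a generic
# symlink at the current version.
mkdir -p "$PKGROOT/foo-1.1/bin"
ln -sfn "$PKGROOT/foo-1.1" "$PKGROOT/foo"

# Upgrading to foo-1.2 installs beside it and retargets the link.
mkdir -p "$PKGROOT/foo-1.2/bin"
ln -sfn "$PKGROOT/foo-1.2" "$PKGROOT/foo"

# PATH and friends reference the generic name, not the version.
PATH="$PKGROOT/foo/bin:$PATH"
readlink "$PKGROOT/foo"
```

Because only the generic symlink appears in the environment variables, switching versions (or rolling back) is a single `ln -sfn`.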
</sect3>
@ -190,15 +190,15 @@
<title>Symlink Style Package Management</title>
<para>This is a variation of the previous package management technique.
Each package is installed similar to the previous scheme. But instead of
making the symlink, each file is symlinked into the
Each package is installed as in the previous scheme. But instead of
making the symlink via a generic package name, each file is symlinked into the
<filename class='directory'>/usr</filename> hierarchy. This removes the
need to expand the environment variables. Though the symlinks can be
created by the user to automate the creation, many package managers have
been written using this approach. A few of the popular ones include Stow,
created by the user, many package managers use this approach, and
automate the creation of the symlinks. A few of the popular ones include Stow,
Epkg, Graft, and Depot.</para>
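A hand-rolled sketch of the idea follows; tools such as Stow automate exactly this loop. A sandbox stands in for the real <filename class='directory'>/usr</filename> hierarchy, and "foo" is a hypothetical package:

```shell
# Symlink-style management in a sandbox: every file installed under
# the package's private tree gets a matching symlink under usr/.
SANDBOX=$(mktemp -d)
mkdir -p "$SANDBOX/usr/pkg/foo-1.1/bin" "$SANDBOX/usr/bin"
touch "$SANDBOX/usr/pkg/foo-1.1/bin/foo"

# Symlink each file of the package into the usr hierarchy.
cd "$SANDBOX/usr/pkg/foo-1.1"
find . -type f | while read -r f; do
    mkdir -p "$SANDBOX/usr/${f%/*}"
    ln -sf "$PWD/${f#./}" "$SANDBOX/usr/${f#./}"
done
ls -l "$SANDBOX/usr/bin/foo"
```

Uninstalling a package then amounts to removing its private tree and the (now dangling) symlinks that point into it.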
<para>The installation needs to be faked, so that the package thinks that
<para>The installation script needs to be fooled, so the package thinks
it is installed in <filename class="directory">/usr</filename> though in
reality it is installed in the
<filename class="directory">/usr/pkg</filename> hierarchy. Installing in
@ -216,7 +216,7 @@ make install</userinput></screen>
<filename class='libraryfile'>/usr/pkg/libfoo/1.1/lib/libfoo.so.1</filename>
instead of <filename class='libraryfile'>/usr/lib/libfoo.so.1</filename>
as you would expect. The correct approach is to use the
<envar>DESTDIR</envar> strategy to fake installation of the package. This
<envar>DESTDIR</envar> variable to direct the installation. This
approach works as follows:</para>
<screen role="nodump"><userinput>./configure --prefix=/usr
@ -224,8 +224,8 @@ make
make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
<para>Most packages support this approach, but there are some which do not.
For the non-compliant packages, you may either need to manually install the
package, or you may find that it is easier to install some problematic
For the non-compliant packages, you may either need to install the
package manually, or you may find that it is easier to install some problematic
packages into <filename class='directory'>/opt</filename>.</para>
</sect3>
@ -237,14 +237,14 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
the package. After the installation, a simple use of the
<command>find</command> command with the appropriate options can generate
a log of all the files installed after the timestamp file was created. A
package manager written with this approach is install-log.</para>
package manager that uses this approach is install-log.</para>
<para>Though this scheme has the advantage of being simple, it has two
drawbacks. If, during installation, the files are installed with any
timestamp other than the current time, those files will not be tracked by
the package manager. Also, this scheme can only be used when one package
is installed at a time. The logs are not reliable if two packages are
being installed on two different consoles.</para>
the package manager. Also, this scheme can only be used when packages
are installed one at a time. The logs are not reliable if two packages are
installed simultaneously from two different consoles.</para>
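The mechanics can be sketched in a sandbox that stands in for <filename class='directory'>/usr</filename>; "foo" is a hypothetical package and the `touch` below stands in for a real `make install`:

```shell
# Timestamp-based install logging, sketched in a sandbox.
SANDBOX=$(mktemp -d)
touch "$SANDBOX/timestamp"      # marker created just before installing
sleep 1                         # make the new files strictly newer
mkdir -p "$SANDBOX/usr/bin"     # a pretend installation step
touch "$SANDBOX/usr/bin/foo"

# Every non-directory newer than the marker belongs to this package.
find "$SANDBOX/usr" ! -type d -newer "$SANDBOX/timestamp" > "$SANDBOX/foo.log"
cat "$SANDBOX/foo.log"
```

The resulting log lists exactly the files created after the marker, which is why files installed with a preserved (older) timestamp escape this scheme.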
</sect3>
@ -262,12 +262,12 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
calls that modify the filesystem. For this approach to work, all the
executables need to be dynamically linked without the suid or sgid bit.
Preloading the library may cause some unwanted side-effects during
installation. Therefore, it is advised that one performs some tests to
ensure that the package manager does not break anything and logs all the
installation. Therefore, it's a good idea to perform some tests to
ensure that the package manager does not break anything, and that it logs all the
appropriate files.</para>
<para>The second technique is to use <command>strace</command>, which
logs all system calls made during the execution of the installation
<para>Another technique is to use <command>strace</command>, which
logs all the system calls made during the execution of the installation
scripts.</para>
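A sketch of this approach, assuming strace is installed: the commented command records the installation, and the pipeline afterwards reduces a log to the files opened for writing. The sample line below imitates strace's output format for illustration only:

```shell
# To record an installation with strace (assumes strace is available):
#   strace -f -e trace=%file -o install.log make install
# Then reduce the log to the paths opened for writing.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
1234  openat(AT_FDCWD, "/usr/bin/foo", O_WRONLY|O_CREAT|O_TRUNC, 0755) = 3
1234  openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
EOF
grep -E 'O_WRONLY|O_RDWR' "$LOG" | sed -E 's/.*"([^"]+)".*/\1/' | sort -u
```

Read-only accesses (like the `ld.so.cache` line) are filtered out, leaving only the files the installation actually created or modified.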
</sect3>
@ -275,10 +275,10 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
<title>Creating Package Archives</title>
<para>In this scheme, the package installation is faked into a separate
tree as described in the Symlink style package management. After the
tree as previously described in the symlink style package management section. After the
installation, a package archive is created using the installed files.
This archive is then used to install the package either on the local
machine or can even be used to install the package on other machines.</para>
This archive is then used to install the package on the local
machine or even on other machines.</para>
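The whole cycle can be sketched with sandboxes standing in for the staging area and for the root of the target machine; "foo" is a hypothetical package:

```shell
# Package-archive sketch: stage with DESTDIR, archive the staged
# tree, then install anywhere by unpacking at the root.
STAGE=$(mktemp -d)          # plays the role of DESTDIR
ROOT=$(mktemp -d)           # plays the role of / on the target
PKG=$(mktemp -u).tar.gz

# 1. Fake installation (a real build would run
#    "make DESTDIR=$STAGE install").
mkdir -p "$STAGE/usr/bin"
echo 'pretend binary' > "$STAGE/usr/bin/foo"

# 2. Create the binary package from the staged files.
tar -C "$STAGE" -czf "$PKG" .

# 3. Install by extracting relative to the target root.
tar -C "$ROOT" -xzf "$PKG"
ls "$ROOT/usr/bin"
```

Because the archive holds relative paths, the same tarball installs onto the build machine (extract at `/`) or onto any other machine of the same architecture.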
<para>This approach is used by most of the package managers found in the
commercial distributions. Examples of package managers that follow this
@ -289,10 +289,10 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
package management for LFS systems is located at <ulink
url="&hints-root;fakeroot.txt"/>.</para>
<para>Creation of package files that include dependency information is
complex and is beyond the scope of LFS.</para>
<para>The creation of package files that include dependency information is
complex, and beyond the scope of LFS.</para>
<para>Slackware uses a <command>tar</command> based system for package
<para>Slackware uses a <command>tar</command>-based system for package
archives. This system purposely does not handle package dependencies
as more complex package managers do. For details of Slackware package
management, see <ulink
@ -322,8 +322,8 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
another computer with the same architecture as the base system is as
simple as using <command>tar</command> on the LFS partition that contains
the root directory (about 250MB uncompressed for a base LFS build), copying
that file via network transfer or CD-ROM to the new system and expanding
it. From that point, a few configuration files will have to be changed.
that file via network transfer or CD-ROM / USB stick to the new system, and expanding
it. After that, a few configuration files will have to be changed.
Configuration files that may need to be updated include:
<filename>/etc/hosts</filename>,
<filename>/etc/fstab</filename>,
@ -342,17 +342,17 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
</phrase>
</para>
<para>A custom kernel may need to be built for the new system depending on
<para>A custom kernel may be needed for the new system, depending on
differences in system hardware and the original kernel
configuration.</para>
<note><para>There have been some reports of issues when copying between
similar but not identical architectures. For instance, the instruction set
for an Intel system is not identical with an AMD processor and later
versions of some processors may have instructions that are unavailable in
for an Intel system is not identical with the AMD processor's instructions, and later
versions of some processors may provide instructions that are unavailable with
earlier versions.</para></note>
<para>Finally the new system has to be made bootable via <xref
<para>Finally, the new system has to be made bootable via <xref
linkend="ch-bootable-grub"/>.</para>
</sect2>
View File
@ -93,7 +93,7 @@
</sect3>
<sect3>
<sect3 id='ch-config-udev-device-node-creation'>
<title>Device Node Creation</title>
<para>Device files are created by the kernel by the <systemitem
View File
@ -107,6 +107,7 @@
<para>Then unmount the virtual file systems:</para>
<screen><userinput>umount -v $LFS/dev/pts
mountpoint -q $LFS/dev/shm &amp;&amp; umount $LFS/dev/shm
umount -v $LFS/dev
umount -v $LFS/run
umount -v $LFS/proc
View File
@ -121,8 +121,12 @@
<!ENTITY root "<systemitem class='username'>root</systemitem>">
<!ENTITY lfs-user "<systemitem class='username'>lfs</systemitem>">
<!ENTITY devtmpfs "<systemitem class='filesystem'>devtmpfs</systemitem>">
<!ENTITY fstab "<filename>/etc/fstab</filename>">
<!ENTITY boot-dir "<filename class='directory'>/boot</filename>">
<!ENTITY ch-final "<xref linkend='chapter-building-system'/>">
<!ENTITY ch-tmp-cross "<xref linkend='chapter-temporary-tools'/>">
<!ENTITY ch-tmp-chroot "<xref linkend='chapter-chroot-temporary-tools'/>">
<!ENTITY % packages-entities SYSTEM "packages.ent">
%packages-entities;
View File
@ -48,20 +48,20 @@
<!ENTITY automake-fin-du "116 MB">
<!ENTITY automake-fin-sbu "less than 0.1 SBU (about 7.7 SBU with tests)">
<!ENTITY bash-version "5.1.16">
<!ENTITY bash-size "10,277 KB">
<!ENTITY bash-version "5.2">
<!ENTITY bash-size "10,695 KB">
<!ENTITY bash-url "&gnu;bash/bash-&bash-version;.tar.gz">
<!ENTITY bash-md5 "c17b20a09fc38d67fb303aeb6c130b4e">
<!ENTITY bash-md5 "cfb4cf795fc239667f187b3d6b3d396f">
<!ENTITY bash-home "&gnu-software;bash/">
<!ENTITY bash-tmp-du "64 MB">
<!ENTITY bash-tmp-sbu "0.5 SBU">
<!ENTITY bash-fin-du "50 MB">
<!ENTITY bash-fin-sbu "1.4 SBU">
<!ENTITY bc-version "6.0.2">
<!ENTITY bc-version "6.0.4">
<!ENTITY bc-size "442 KB">
<!ENTITY bc-url "https://github.com/gavinhoward/bc/releases/download/&bc-version;/bc-&bc-version;.tar.xz">
<!ENTITY bc-md5 "101e62dd9c2b90bf18c38d858aa36f0d">
<!ENTITY bc-md5 "1e1c90de1a11f3499237425de1673ef1">
<!ENTITY bc-home "https://git.yzena.com/gavin/bc">
<!ENTITY bc-fin-du "7.4 MB">
<!ENTITY bc-fin-sbu "less than 0.1 SBU">
@ -114,10 +114,10 @@
<!ENTITY coreutils-fin-du "159 MB">
<!ENTITY coreutils-fin-sbu "2.8 SBU">
<!ENTITY dbus-version "1.14.0">
<!ENTITY dbus-version "1.14.2">
<!ENTITY dbus-size "1,332 KB">
<!ENTITY dbus-url "https://dbus.freedesktop.org/releases/dbus/dbus-&dbus-version;.tar.xz">
<!ENTITY dbus-md5 "ddd5570aff05191dbee8e42d751f1b7d">
<!ENTITY dbus-md5 "2d9a6b441e6f844d41c35a004f0ef50b">
<!ENTITY dbus-home "https://www.freedesktop.org/wiki/Software/dbus">
<!ENTITY dbus-fin-du "19 MB">
<!ENTITY dbus-fin-sbu "0.2 SBU">
@ -317,10 +317,10 @@
<!ENTITY gzip-fin-du "21 MB">
<!ENTITY gzip-fin-sbu "0.3 SBU">
<!ENTITY iana-etc-version "20220812">
<!ENTITY iana-etc-version "20220922">
<!ENTITY iana-etc-size "584 KB">
<!ENTITY iana-etc-url "https://github.com/Mic92/iana-etc/releases/download/&iana-etc-version;/iana-etc-&iana-etc-version;.tar.gz">
<!ENTITY iana-etc-md5 "851a53efd53c77d0ad7b3d2b68d8a3fc">
<!ENTITY iana-etc-md5 "2fdc746cfc1bc10f841760fd6a92618c">
<!ENTITY iana-etc-home "https://www.iana.org/protocols">
<!ENTITY iana-etc-fin-du "4.8 MB">
<!ENTITY iana-etc-fin-sbu "less than 0.1 SBU">
@ -390,18 +390,18 @@
<!ENTITY lfs-bootscripts-cfg-du "BOOTSCRIPTS-INSTALL-KB KB">
<!ENTITY lfs-bootscripts-cfg-sbu "less than 0.1 SBU">
<!ENTITY libcap-version "2.65">
<!ENTITY libcap-size "176 KB">
<!ENTITY libcap-version "2.66">
<!ENTITY libcap-size "178 KB">
<!ENTITY libcap-url "&kernel;linux/libs/security/linux-privs/libcap2/libcap-&libcap-version;.tar.xz">
<!ENTITY libcap-md5 "3543e753dd941255c4def6cc67a462bb">
<!ENTITY libcap-md5 "00afd6e13bc94b2543b1a70770bdb41f">
<!ENTITY libcap-home "https://sites.google.com/site/fullycapable/">
<!ENTITY libcap-fin-du "2.7 MB">
<!ENTITY libcap-fin-sbu "less than 0.1 SBU">
<!ENTITY libffi-version "3.4.2">
<!ENTITY libffi-size "1,320 KB">
<!ENTITY libffi-version "3.4.3">
<!ENTITY libffi-size "1,327 KB">
<!ENTITY libffi-url "https://github.com/libffi/libffi/releases/download/v&libffi-version;/libffi-&libffi-version;.tar.gz">
<!ENTITY libffi-md5 "294b921e6cf9ab0fbaea4b639f8fdbe8">
<!ENTITY libffi-md5 "b57b0ac1d1072681cee9148a417bd2ec">
<!ENTITY libffi-home "https://sourceware.org/libffi/">
<!ENTITY libffi-fin-du "10 MB">
<!ENTITY libffi-fin-sbu "1.8 SBU">
@ -424,12 +424,12 @@
<!ENTITY linux-major-version "5">
<!ENTITY linux-minor-version "19">
<!ENTITY linux-patch-version "8">
<!ENTITY linux-patch-version "12">
<!--<!ENTITY linux-version "&linux-major-version;.&linux-minor-version;">-->
<!ENTITY linux-version "&linux-major-version;.&linux-minor-version;.&linux-patch-version;">
<!ENTITY linux-size "128,547 KB">
<!ENTITY linux-size "128,599 KB">
<!ENTITY linux-url "&kernel;linux/kernel/v&linux-major-version;.x/linux-&linux-version;.tar.xz">
<!ENTITY linux-md5 "ae08d14f9b7ed3d47c0d22b6d235507a">
<!ENTITY linux-md5 "6a8c953d04986027b033bc92185745bf">
<!ENTITY linux-home "https://www.kernel.org/">
<!-- measured for 5.13.4 / gcc-11.1.0 on x86_64 : minimum is
allnoconfig rounded down to allow for ongoing cleanups,
@ -602,11 +602,11 @@
<!ENTITY python-docs-md5 "d5923c417995334e72c2561812905d23">
<!ENTITY python-docs-size "7,176 KB">
<!ENTITY readline-version "8.1.2">
<!ENTITY readline-soversion "8.1"><!-- used for stripping -->
<!ENTITY readline-size "2,923 KB">
<!ENTITY readline-version "8.2">
<!ENTITY readline-soversion "8.2"><!-- used for stripping -->
<!ENTITY readline-size "2,973 KB">
<!ENTITY readline-url "&gnu;readline/readline-&readline-version;.tar.gz">
<!ENTITY readline-md5 "12819fa739a78a6172400f399ab34f81">
<!ENTITY readline-md5 "4aa1b31be779e6b84f9a96cb66bc50f6">
<!ENTITY readline-home "https://tiswww.case.edu/php/chet/readline/rltop.html">
<!ENTITY readline-fin-du "15 MB">
<!ENTITY readline-fin-sbu "0.1 SBU">
@ -694,10 +694,10 @@
<!ENTITY texinfo-fin-du "114 MB">
<!ENTITY texinfo-fin-sbu "0.6 SBU">
<!ENTITY tzdata-version "2022c">
<!ENTITY tzdata-size "423 KB">
<!ENTITY tzdata-version "2022d">
<!ENTITY tzdata-size "424 KB">
<!ENTITY tzdata-url "https://www.iana.org/time-zones/repository/releases/tzdata&tzdata-version;.tar.gz">
<!ENTITY tzdata-md5 "4e3b2369b68e713ba5d3f7456f20bfdb">
<!ENTITY tzdata-md5 "e55dbeb2121230a0ae7c58dbb47ae8c8">
<!ENTITY tzdata-home "https://www.iana.org/time-zones">
<!ENTITY udev-lfs-version "udev-lfs-20171102">

View File

@ -11,29 +11,29 @@
<title>General Compilation Instructions</title>
<para>When building packages there are several assumptions made within
the instructions:</para>
<para>Here are some things you should know about building each package:</para>
<itemizedlist>
<listitem>
<para>Several of the packages are patched before compilation, but only when
<para>Several packages are patched before compilation, but only when
the patch is needed to circumvent a problem. A patch is often needed in
both this and the following chapters, but sometimes in only one location.
both the current and the following chapters, but sometimes, when the same package
is built more than once, the patch is not needed right away.
Therefore, do not be concerned if instructions for a downloaded patch seem
to be missing. Warning messages about <emphasis>offset</emphasis> or
<emphasis>fuzz</emphasis> may also be encountered when applying a patch. Do
not worry about these warnings, as the patch was still successfully
not worry about these warnings; the patch was still successfully
applied.</para>
</listitem>
<listitem>
<para>During the compilation of most packages, there will be several
warnings that scroll by on the screen. These are normal and can safely be
ignored. These warnings are as they appear&mdash;warnings about
<para>During the compilation of most packages, some
warnings will scroll by on the screen. These are normal and can safely be
ignored. These warnings are usually about
deprecated, but not invalid, use of the C or C++ syntax. C standards change
fairly often, and some packages still use the older standard. This is not a
problem, but does prompt the warning.</para>
fairly often, and some packages have not yet been updated. This is not a
serious problem, but it does cause the warnings to appear.</para>
</listitem>
<listitem>
@ -69,25 +69,25 @@
symbolic link to <command>gawk</command>.</para></listitem>
<listitem override='bullet'><para><command>/usr/bin/yacc</command> is a
symbolic link to <command>bison</command> or a small script that
symbolic link to <command>bison</command>, or to a small script that
executes bison.</para></listitem>
</itemizedlist>
</important>
<important>
<para>To re-emphasize the build process:</para>
<para>Here is a synopsis of the build process.</para>
<orderedlist numeration="arabic" spacing="compact">
<listitem>
<para>Place all the sources and patches in a directory that will be
accessible from the chroot environment such as
accessible from the chroot environment, such as
<filename class="directory">/mnt/lfs/sources/</filename>.<!-- Do
<emphasis>not</emphasis> put sources in
<filename class="directory">/mnt/lfs/tools/</filename>. --></para>
</listitem>
<listitem>
<para>Change to the sources directory.</para>
<para>Change to the <filename class="directory">/mnt/lfs/sources/</filename> directory.</para>
</listitem>
<listitem id='buildinstr' xreflabel='Package build instructions'>
<para>For each package:</para>
@ -97,22 +97,21 @@
to be built. In <xref linkend="chapter-cross-tools"/> and
<xref linkend="chapter-temporary-tools"/>, ensure you are
the <emphasis>lfs</emphasis> user when extracting the package.</para>
<para>All methods to get the source code tree being built
in-position, except extracting the package tarball, are not
supported. Notably, using <command>cp -R</command> to copy the
<para>Do not use any method except the <command>tar</command> command
to extract the source code. Notably, using the <command>cp -R</command>
command to copy the
source code tree somewhere else can destroy links and
timestamps in the sources tree and cause building
failure.</para>
timestamps in the source tree, and cause the build to fail.</para>
</listitem>
<listitem>
<para>Change to the directory created when the package was
extracted.</para>
</listitem>
<listitem>
<para>Follow the book's instructions for building the package.</para>
<para>Follow the instructions for building the package.</para>
</listitem>
<listitem>
<para>Change back to the sources directory.</para>
<para>Change back to the sources directory when the build is complete.</para>
</listitem>
<listitem>
<para>Delete the extracted source directory unless instructed otherwise.</para>
View File
@ -10,25 +10,25 @@
<title>Introduction</title>
<para>This part is divided into three stages: first building a cross
compiler and its associated libraries; second, use this cross toolchain
<para>This part is divided into three stages: first, building a cross
compiler and its associated libraries; second, using this cross toolchain
to build several utilities in a way that isolates them from the host
distribution; third, enter the chroot environment, which further improves
host isolation, and build the remaining tools needed to build the final
distribution; and third, entering the chroot environment (which further improves
host isolation) and constructing the remaining tools needed to build the final
system.</para>
<important><para>With this part begins the real work of building a new
system. It requires much care in ensuring that the instructions are
followed exactly as the book shows them. You should try to understand
what they do, and whatever your eagerness to finish your build, you should
refrain from blindly type them as shown, but rather read documentation when
<important><para>This is where the real work of building a new system
begins. Be very careful to follow the instructions exactly as the book
shows them. You should try to understand what each command does,
and no matter how eager you are to finish your build, you should
refrain from blindly typing the commands as shown. Read the documentation when
there is something you do not understand. Also, keep track of your typing
and of the output of commands, by sending them to a file, using the
<command>tee</command> utility. This allows for better diagnosing
if something gets wrong.</para></important>
and of the output of commands, by using the <command>tee</command> utility
to send the terminal output to a file. This makes debugging easier
if something goes wrong.</para></important>
<para>The next section gives a technical introduction to the build process,
while the following one contains <emphasis role="strong">very
<para>The next section is a technical introduction to the build process,
while the following one presents <emphasis role="strong">very
important</emphasis> general instructions.</para>
</sect1>
View File
@ -11,26 +11,26 @@
<title>Toolchain Technical Notes</title>
<para>This section explains some of the rationale and technical details
behind the overall build method. It is not essential to immediately
behind the overall build method. Don't try to immediately
understand everything in this section. Most of this information will be
clearer after performing an actual build. This section can be referred
to at any time during the process.</para>
clearer after performing an actual build. Come back and re-read this chapter
at any time during the build process.</para>
<para>The overall goal of <xref linkend="chapter-cross-tools"/> and <xref
linkend="chapter-temporary-tools"/> is to produce a temporary area that
contains a known-good set of tools that can be isolated from the host system.
By using <command>chroot</command>, the commands in the remaining chapters
will be contained within that environment, ensuring a clean, trouble-free
linkend="chapter-temporary-tools"/> is to produce a temporary area
containing a set of tools that are known to be good, and that are isolated from the host system.
By using the <command>chroot</command> command, the compilations in the remaining chapters
will be isolated within that environment, ensuring a clean, trouble-free
build of the target LFS system. The build process has been designed to
minimize the risks for new readers and to provide the most educational value
minimize the risks for new readers, and to provide the most educational value
at the same time.</para>
<para>The build process is based on the process of
<para>This build process is based on
<emphasis>cross-compilation</emphasis>. Cross-compilation is normally used
for building a compiler and its toolchain for a machine different from
the one that is used for the build. This is not strictly needed for LFS,
to build a compiler and its associated toolchain for a machine different from
the one that is used for the build. This is not strictly necessary for LFS,
since the machine where the new system will run is the same as the one
used for the build. But cross-compilation has the great advantage that
used for the build. But cross-compilation has one great advantage:
anything that is cross-compiled cannot depend on the host environment.</para>
<sect2 id="cross-compile" xreflabel="About Cross-Compilation">
@ -39,47 +39,46 @@
<note>
<para>
The LFS book is not, and does not contain a general tutorial to
build a cross (or native) toolchain. Don't use the command in the
book for a cross toolchain which will be used for some purpose other
The LFS book is not (and does not contain) a general tutorial to
build a cross (or native) toolchain. Don't use the commands in the
book for a cross toolchain for some purpose other
than building LFS, unless you really understand what you are doing.
</para>
</note>
<para>Cross-compilation involves some concepts that deserve a section on
their own. Although this section may be omitted in a first reading,
coming back to it later will be beneficial to your full understanding of
<para>Cross-compilation involves some concepts that deserve a section of
their own. Although this section may be omitted on a first reading,
coming back to it later will help you gain a fuller understanding of
the process.</para>
<para>Let us first define some terms used in this context:</para>
<para>Let us first define some terms used in this context.</para>
<variablelist>
<varlistentry><term>build</term><listitem>
<varlistentry><term>The build</term><listitem>
<para>is the machine where we build programs. Note that this machine
is referred to as the <quote>host</quote> in other
sections.</para></listitem>
is also referred to as the <quote>host</quote>.</para></listitem>
</varlistentry>
<varlistentry><term>The host</term><listitem>
<para>is the machine/system where the built programs will run. Note
that this use of <quote>host</quote> is not the same as in other
sections.</para></listitem>
</varlistentry>
<varlistentry><term>The target</term><listitem>
<para>is only used for compilers. It is the machine the compiler
produces code for. It may be different from both the build and
the host.</para></listitem>
</varlistentry>
</variablelist>
<para>As an example, let us imagine the following scenario (sometimes
referred to as <quote>Canadian Cross</quote>): we have a
compiler on a slow machine only, let's call it machine A, and the compiler
ccA. We also have a fast machine (B), but no compiler for (B), and we
want to produce code for a third, slow machine (C). We will build a
compiler for machine C in three stages.</para>
<informaltable align="center">
<tgroup cols="5">
<tbody>
<row>
<entry>1</entry><entry>A</entry><entry>A</entry><entry>B</entry>
<entry>Build cross-compiler cc1 using ccA on machine A.</entry>
</row>
<row>
<entry>2</entry><entry>A</entry><entry>B</entry><entry>C</entry>
<entry>Build cross-compiler cc2 using cc1 on machine A.</entry>
</row>
<row>
<entry>3</entry><entry>B</entry><entry>C</entry><entry>C</entry>
<entry>Build compiler ccC using cc2 on machine B.</entry>
</row>
</tbody>
</tgroup>
</informaltable>
<para>Then, all the programs needed by machine C can be compiled
using cc2 on the fast machine B. Note that unless B can run programs
produced for C, there is no way to test the newly built programs until machine
C itself is running. For example, to run a test suite on ccC, we may want to add a
fourth stage:</para>
<informaltable align="center">
<tbody>
<row>
<entry>4</entry><entry>C</entry><entry>C</entry><entry>C</entry>
<entry>Rebuild and test ccC using ccC on machine C.</entry>
</row>
</tbody>
</tgroup>
<title>Implementation of Cross-Compilation for LFS</title>
<note>
      <para>All the packages involved in cross-compilation in the book
      use an autoconf-based build system, which accepts system types
      in the form cpu-vendor-kernel-os, referred to as the system
      triplet. Since the vendor field is largely irrelevant, autoconf
      allows it to be omitted. An astute reader may wonder why a
      <quote>triplet</quote> refers to a four-component name. The
      reason is historical: the kernel and os fields originally formed
      a single <quote>system</quote> field. That three-field form is
      still valid today for some systems, for example
      <literal>x86_64-unknown-freebsd</literal>. But in other cases,
      two systems can share the same kernel and still be too different
      to use the same triplet. For example, an Android system running
      on a mobile phone is completely different from Ubuntu running on
      an ARM64 server, even though both run on the same type of CPU
      (ARM64) and use the same kernel (Linux).
Without an emulation layer, you cannot run an
executable for the server on the mobile phone or vice versa. So the
<quote>system</quote> field is separated into kernel and os fields to
designate these systems unambiguously. For our example, the Android
system is designated <literal>aarch64-unknown-linux-android</literal>,
and the Ubuntu system is designated
<literal>aarch64-unknown-linux-gnu</literal>. The word
<quote>triplet</quote> remained. A simple way to determine your
system triplet is to run the <command>config.guess</command>
script that comes with the source for many packages. Unpack the binutils
sources and run the script: <userinput>./config.guess</userinput> and note
the output. For example, for a 32-bit Intel processor the
output will be <emphasis>i686-pc-linux-gnu</emphasis>. On a 64-bit
system it will be <emphasis>x86_64-pc-linux-gnu</emphasis>. On most
Linux systems the even simpler <command>gcc -dumpmachine</command> command
will give you similar information.</para>
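As an aside, the four fields of a system triplet can be pulled apart with standard shell tools. The sketch below uses a hard-coded example triplet rather than the output of <command>config.guess</command>, so it works on any machine:

```shell
# Illustrative sketch only: split a hard-coded system triplet into its
# cpu, vendor, kernel and os fields with cut(1).
triplet=x86_64-pc-linux-gnu
cpu=$(echo "$triplet"    | cut -d- -f1)
vendor=$(echo "$triplet" | cut -d- -f2)
kernel=$(echo "$triplet" | cut -d- -f3)
os=$(echo "$triplet"     | cut -d- -f4)
echo "cpu=$cpu vendor=$vendor kernel=$kernel os=$os"
```

Running the same commands on the triplet reported by your own system shows which field the LFS build later replaces with <quote>lfs</quote>.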
<para>You should also be aware of the name of the platform's dynamic linker, often
referred to as the dynamic loader (not to be confused with the standard
linker <command>ld</command> that is part of binutils). The dynamic linker
      provided by the glibc package finds and loads the shared libraries needed by a
program, prepares the program to run, and then runs it. The name of the
dynamic linker for a 32-bit Intel machine is <filename
class="libraryfile">ld-linux.so.2</filename>; it's <filename
class="libraryfile">ld-linux-x86-64.so.2</filename> on 64-bit systems. A
sure-fire way to determine the name of the dynamic linker is to inspect a
random binary from the host system by running: <userinput>readelf -l
&lt;name of binary&gt; | grep interpreter</userinput> and noting the
output. The authoritative reference covering all platforms is in the
<filename>shlib-versions</filename> file in the root of the glibc source
tree.</para>
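For instance, the inspection described above can be rehearsed on any host binary. This is a sketch only: it assumes <command>readelf</command> from binutils is installed and uses <filename>/bin/sh</filename> as the <quote>random binary</quote>:

```shell
# Sketch: find the dynamic loader recorded in a host binary.
# Assumes binutils' readelf is available; falls back to a note if not.
if command -v readelf >/dev/null 2>&1; then
  interp=$(readelf -l /bin/sh | grep interpreter || true)
else
  interp="readelf not installed"
fi
interp=${interp:-"no interpreter found"}
echo "$interp"
```

On a 64-bit x86 system the printed line normally names <filename class="libraryfile">ld-linux-x86-64.so.2</filename>.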
</note>
<para>In order to fake a cross compilation in LFS, the name of the host triplet
is slightly adjusted by changing the &quot;vendor&quot; field in the
<envar>LFS_TGT</envar> variable so it says &quot;lfs&quot;. We also use the
<parameter>--with-sysroot</parameter> option when building the cross linker and
cross compiler to tell them where to find the needed host files. This
ensures that none of the other programs built in <xref
linkend="chapter-temporary-tools"/> can link to libraries on the build
machine. Only two stages are mandatory, plus one more for tests.</para>
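Concretely, the adjusted triplet is formed along the lines of the following sketch (the exact command used later in the book may differ slightly):

```shell
# Sketch of how the adjusted target triplet is formed: take the CPU
# field from uname and force the vendor field to "lfs".
export LFS_TGT=$(uname -m)-lfs-linux-gnu
echo "$LFS_TGT"
```

On a 64-bit x86 host this yields <literal>x86_64-lfs-linux-gnu</literal>, which differs from the host triplet only in the vendor field, so the toolchain treats the build as a cross-compilation.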
<informaltable align="center">
<tgroup cols="5">
<tbody>
<row>
<entry>1</entry><entry>pc</entry><entry>pc</entry><entry>lfs</entry>
<entry>Build cross-compiler cc1 using cc-pc on pc.</entry>
</row>
<row>
<entry>2</entry><entry>pc</entry><entry>lfs</entry><entry>lfs</entry>
<entry>Build compiler cc-lfs using cc1 on pc.</entry>
</row>
<row>
<entry>3</entry><entry>lfs</entry><entry>lfs</entry><entry>lfs</entry>
<entry>Rebuild and test cc-lfs using cc-lfs on lfs.</entry>
</row>
</tbody>
</tgroup>
</informaltable>
<para>In the preceding table, <quote>on pc</quote> means the commands are run
on a machine using the already installed distribution. <quote>On
lfs</quote> means the commands are run in a chrooted environment.</para>
    <para>Now, there is more to cross-compiling: the C language is not
    just a compiler; it also defines a standard library. In this book, the
GNU C library, named glibc, is used (there is an alternative, &quot;musl&quot;). This library must
be compiled for the LFS machine; that is, using the cross compiler cc1.
But the compiler itself uses an internal library implementing complex
subroutines for functions not available in the assembler instruction set. This
internal library is named libgcc, and it must be linked to the glibc
library to be fully functional! Furthermore, the standard library for
C++ (libstdc++) must also be linked with glibc. The solution to this
chicken and egg problem is first to build a degraded cc1-based libgcc,
lacking some functionalities such as threads and exception handling, and then
to build glibc using this degraded compiler (glibc itself is not
degraded), and also to build libstdc++. This last library will lack some of the
functionality of libgcc.</para>
<para>This is not the end of the story: the upshot of the preceding
paragraph is that cc1 is unable to build a fully functional libstdc++, but
this is the only compiler available for building the C/C++ libraries
during stage 2! Of course, the compiler built during stage 2, cc-lfs,
would be able to build those libraries, but (1) the build system of
gcc does not know that it is usable on pc, and (2) using it on pc
would create a risk of linking to the pc libraries, since cc-lfs is a native
compiler. So we have to re-build libstdc++ later as a part of
gcc stage 2.</para>
<para>In &ch-final; (or <quote>stage 3</quote>), all packages needed for
the LFS system are built. Even if a package is already installed into
the LFS system in a previous chapter, we still rebuild the package
    unless we are completely sure it's unnecessary. The main reason for
    rebuilding these packages is to settle them down: if we reinstall an LFS
    package on a complete LFS system, the installed content of the package
    should be the same as the content of the same package installed in
    &ch-final;. The temporary packages installed in &ch-tmp-cross; or
    &ch-tmp-chroot; cannot satisfy this expectation, because some of them
    are built without optional dependencies installed, and autoconf cannot
    perform some feature checks in &ch-tmp-cross; because of cross
    compilation, causing the temporary packages to lack optional features
    or use suboptimal code routines. Additionally, a minor reason for
    rebuilding the packages is to allow running their test suites.</para>
</sect2>
be part of the final system.</para>
<para>Binutils is installed first because the <command>configure</command>
runs of both gcc and glibc perform various feature tests on the assembler
and linker to determine which software features to enable or disable. This
is more important than one might realize at first. An incorrectly configured
gcc or glibc can result in a subtly broken toolchain, where the impact of
such breakage might not show up until near the end of the build of an
entire distribution. A test suite failure will usually highlight this error
before too much additional work is performed.</para>
<command>$LFS_TGT-gcc dummy.c -Wl,--verbose 2&gt;&amp;1 | grep succeeded</command>
will show all the files successfully opened during the linking.</para>
<para>The next package installed is gcc. An example of what can be
seen during its run of <command>configure</command> is:</para>
<screen><computeroutput>checking what assembler to use... /mnt/lfs/tools/i686-lfs-linux-gnu/bin/as
checking what linker to use... /mnt/lfs/tools/i686-lfs-linux-gnu/bin/ld</computeroutput></screen>
<para>This is important for the reasons mentioned above. It also
demonstrates that gcc's configure script does not search the PATH
directories to find which tools to use. However, during the actual
operation of <command>gcc</command> itself, the same search paths are not
necessarily used. To find out which standard linker <command>gcc</command>
order.</para>
<para>Next installed are sanitized Linux API headers. These allow the
standard C library (glibc) to interface with features that the Linux
kernel will provide.</para>
<para>The next package installed is glibc. The most important
considerations for building glibc are the compiler, binary tools, and
kernel headers. The compiler is generally not an issue since glibc will
always use the compiler relating to the <parameter>--host</parameter>
parameter passed to its configure script; e.g. in our case, the compiler
will be <command>$LFS_TGT-gcc</command>. The binary tools and kernel
<envar>$LFS_TGT</envar> expanded) to control which binary tools are used
and the use of the <parameter>-nostdinc</parameter> and
<parameter>-isystem</parameter> flags to control the compiler's include
search path. These items highlight an important aspect of the glibc
package&mdash;it is very self-sufficient in terms of its build machinery
and generally does not rely on toolchain defaults.</para>
<para>As mentioned above, the standard C++ library is compiled next, followed in
<xref linkend="chapter-temporary-tools"/> by other programs that need
    to be cross-compiled to break circular dependencies at build time.
The install step of all those packages uses the
<envar>DESTDIR</envar> variable to force installation
in the LFS filesystem.</para>
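The effect of <envar>DESTDIR</envar> can be imitated with a plain <command>install</command> command. All paths in this sketch are made up for illustration; it is not a command from the book:

```shell
# Sketch: stage a file under a DESTDIR-style prefix instead of the
# root filesystem. A package's "make DESTDIR=$LFS install" has the
# same effect for every file the package installs.
DESTDIR=/tmp/destdir-demo           # stands in for $LFS
echo 'demo' > /tmp/demo-file
install -D -m 644 /tmp/demo-file "$DESTDIR/usr/share/demo/demo-file"
ls "$DESTDIR/usr/share/demo"
rm -f /tmp/demo-file
```

The file lands under <filename class="directory">/tmp/destdir-demo/usr/...</filename> rather than <filename class="directory">/usr/...</filename>, which is exactly how the temporary packages end up inside the LFS partition instead of on the host.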
<para>At the end of <xref linkend="chapter-temporary-tools"/> the native
LFS compiler is installed. First binutils-pass2 is built,
in the same <envar>DESTDIR</envar> directory as the other programs,
then the second pass of gcc is constructed, omitting some
non-critical libraries. Due to some weird logic in gcc's
configure script, <envar>CC_FOR_TARGET</envar> ends up as
<command>cc</command> when the host is the same as the target, but
different from the build system. This is why
<parameter>CC_FOR_TARGET=$LFS_TGT-gcc</parameter> is declared explicitly
as one of the configuration options.</para>
<para>Upon entering the chroot environment in <xref
linkend="chapter-chroot-temporary-tools"/>,
the temporary installations of programs needed for the proper
operation of the toolchain are performed. From this point onwards, the
core toolchain is self-contained and self-hosted. In
<xref linkend="chapter-building-system"/>, final versions of all the