Automatic merge of trunk into multilib

Thomas Trepl (Moody) 2022-10-02 00:30:12 +02:00
commit c093e6b9d5
48 changed files with 989 additions and 514 deletions

View File

@ -26,7 +26,7 @@ ifeq ($(REV), sysv)
BASEDIR ?= ~/lfs-book
PDF_OUTPUT ?= LFS-BOOK.pdf
NOCHUNKS_OUTPUT ?= LFS-BOOK.html
DUMPDIR ?= ~/cross-lfs-commands
DUMPDIR ?= ~/lfs-commands
else
BASEDIR ?= ~/lfs-systemd
PDF_OUTPUT ?= LFS-SYSD-BOOK.pdf
@ -212,22 +212,17 @@ $(BASEDIR)/md5sums: stylesheets/wget-list.xsl chapter03/chapter03.xml \
version:
$(Q)./git-version.sh $(REV)
#dump-commands: validate
# @echo "Dumping book commands..."
# $(Q)xsltproc --nonet \
# --output $(RENDERTMP)/lfs-html.xml \
# --stringparam profile.revision $(REV) \
# stylesheets/lfs-xsl/profile.xsl \
# $(RENDERTMP)/lfs-full.xml
dump-commands: validate
@echo "Dumping book commands..."
# $(Q)rm -rf $(DUMPDIR)
$(Q)rm -rf $(DUMPDIR)
# $(Q)xsltproc --output $(DUMPDIR)/ \
# stylesheets/dump-commands.xsl \
# $(RENDERTMP)/lfs-html.xml
# @echo "Dumping book commands complete in $(DUMPDIR)"
$(Q)xsltproc --output $(DUMPDIR)/ \
stylesheets/dump-commands.xsl \
$(RENDERTMP)/lfs-full.xml
@echo "Dumping book commands complete in $(DUMPDIR)"
all: book nochunks pdf # dump-commands
all: book nochunks pdf dump-commands
.PHONY : all book dump-commands nochunks pdf profile-html tmpdir validate md5sums wget-list version
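With the dump-commands target restored and wired into the default target above, the command dump can also be regenerated on its own. A minimal invocation, assuming a checkout of the book sources with xsltproc installed (the REV and DUMPDIR values here are only examples), might look like:

make REV=sysv DUMPDIR=~/lfs-commands dump-commands
ls ~/lfs-commands    # the dumped command files land here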

View File

@ -1186,7 +1186,7 @@
<segtitle>&external;</segtitle>
<seglistitem>
<seg>
<ulink url="&blfs-book;general/pcre.html">PCRE</ulink>
<ulink url="&blfs-book;general/pcre2.html">PCRE2</ulink>
and
<ulink url="&blfs-book;general/libsigsegv.html">libsigsegv</ulink>
</seg>
@ -2356,7 +2356,7 @@
<segmentedlist id="patch-rundeps">
<segtitle>&runtime;</segtitle>
<seglistitem>
<seg>Glibc and Patch</seg>
<seg>Glibc</seg>
</seglistitem>
</segmentedlist>

View File

@ -1,3 +1,6 @@
2022-09-30 Bruce Dubbs <bdubbs@linuxfromscratch.org>
* Mount /dev/shm as a tmpfs.
2022-07-23 Thomas Trepl <thomas@linuxfromscratch.org>
* Mark a RAID array clean when root (/) has been remounted
r/o when the system goes down. Otherwise, the array remains

View File

@ -38,8 +38,8 @@ case "${1}" in
mount /run || failed=1
fi
mkdir -p /run/lock /run/shm
chmod 1777 /run/shm /run/lock
mkdir -p /run/lock
chmod 1777 /run/lock
log_info_msg "Mounting virtual file systems: ${INFO}/run"
@ -58,7 +58,9 @@ case "${1}" in
mount -o mode=0755,nosuid /dev || failed=1
fi
ln -sfn /run/shm /dev/shm
mkdir -p /dev/shm
log_info_msg2 " ${INFO}/dev/shm"
mount -o nosuid,nodev /dev/shm || failed=1
(exit ${failed})
evaluate_retval
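After booting with the updated script, one hedged way to confirm the new /dev/shm handling (not part of the script itself) is:

findmnt /dev/shm
# expected output is something like:
# TARGET   SOURCE FSTYPE OPTIONS
# /dev/shm tmpfs  tmpfs  rw,nosuid,nodev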

View File

@ -20,8 +20,8 @@
# Should-Stop: $local_fs
# Default-Start: S
# Default-Stop: 0 6
# Short-Description: Mounts and unmounts swap partitions.
# Description: Mounts and unmounts swap partitions defined in
# Short-Description: Activates and deactivates swap partitions.
# Description: Activates and deactivates swap partitions defined in
# /etc/fstab.
# X-LFS-Provided-By: LFS
### END INIT INFO

View File

@ -40,6 +40,69 @@
appropriate for the entry or if needed the entire day's listitem.
-->
<listitem>
<para>2022-10-01</para>
<itemizedlist>
<listitem>
<para>[bdubbs] - Update to iana-etc-20220922. Addresses
<ulink url="&lfs-ticket-root;5006">#5006</ulink>.</para>
</listitem>
<listitem>
<para>[bdubbs] - Update to tzdata-2022d. Fixes
<ulink url="&lfs-ticket-root;5119">#5119</ulink>.</para>
</listitem>
<listitem>
<para>[bdubbs] - Update to readline-8.2. Fixes
<ulink url="&lfs-ticket-root;5121">#5121</ulink>.</para>
</listitem>
<listitem>
<para>[bdubbs] - Update to linux-5.19.12. Fixes
<ulink url="&lfs-ticket-root;5115">#5115</ulink>.</para>
</listitem>
<listitem>
<para>[bdubbs] - Update to libffi-3.4.3. Fixes
<ulink url="&lfs-ticket-root;5116">#5116</ulink>.</para>
</listitem>
<listitem>
<para>[bdubbs] - Update to libcap-2.66. Fixes
<ulink url="&lfs-ticket-root;512">#5120</ulink>.</para>
</listitem>
<listitem revision="systemd">
<para>[bdubbs] - Update to dbus-1.14.2. Fixes
<ulink url="&lfs-ticket-root;5123">#5123</ulink>.</para>
</listitem>
<listitem>
<para>[bdubbs] - Update to bc-6.0.4. Fixes
<ulink url="&lfs-ticket-root;5114">#5114</ulink>.</para>
</listitem>
<listitem>
<para>[bdubbs] - Update to bash-5.2. Fixes
<ulink url="&lfs-ticket-root;5122">#5122</ulink>.</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>2022-09-22</para>
<itemizedlist>
<listitem>
<para>[bdubbs] - Update to expat-2.4.9 (Security Update). Fixes
<ulink url="&lfs-ticket-root;5117">#5117</ulink>.</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>2022-09-20</para>
<itemizedlist>
<listitem>
<para>[bdubbs] - Adapt instructions depending on
host setup of /dev/shm when creating virtual filesystems
for chroot.</para>
</listitem>
</itemizedlist>
</listitem>
<listitem>
<para>2022-09-15</para>
<itemizedlist>

View File

@ -46,7 +46,7 @@
important issues you need to be aware of before beginning to
work your way through <xref linkend="chapter-cross-tools"/> and beyond.</para>
<para><xref linkend="chapter-cross-tools"/>, explains the installation of
<para><xref linkend="chapter-cross-tools"/> explains the installation of
the initial tool chain, (binutils, gcc, and glibc) using cross compilation
techniques to isolate the new tools from the host system.</para>
@ -61,7 +61,7 @@
seem excessive. A full technical explanation as to why this is done is
provided in <xref linkend="ch-tools-toolchaintechnotes"/>.</para>
<para>In <xref linkend="chapter-building-system"/>, The
<para>In <xref linkend="chapter-building-system"/> the
full LFS system is built. Another advantage provided by the chroot
environment is that it allows you to continue using the host system
while LFS is being built. While waiting for package compilations to

View File

@ -11,6 +11,14 @@
<title>What's new since the last release</title>
<para>In the 11.3 release, <parameter>--enable-default-pie</parameter>
and <parameter>--enable-default-ssp</parameter> are enabled for GCC.
They can mitigate some types of malicious attacks, but they cannot provide
full protection. If you are working through a programming textbook,
you may need to disable PIE and SSP with the GCC options
<parameter>-fno-pie -no-pie -fno-stack-protector</parameter>,
because some textbooks assume they are disabled by default.</para>
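A quick way to see whether these defaults are in effect is to inspect the ELF header of a freshly built test program. This is only an illustrative sketch (the file names are arbitrary), not part of the book's instructions:

<screen role="nodump"><userinput>echo 'int main(void){ return 0; }' > pie-test.c
gcc pie-test.c -o pie-test
readelf -h pie-test | grep 'Type:'
gcc -fno-pie -no-pie -fno-stack-protector pie-test.c -o nopie-test
readelf -h nopie-test | grep 'Type:'</userinput></screen>

With the defaults enabled, the first binary is typically reported as DYN (a position-independent executable), while the second is reported as EXEC.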
<para>Below is a list of package updates made since the previous
release of the book.</para>
@ -38,9 +46,9 @@
<!--<listitem>
<para>Automake-&automake-version;</para>
</listitem>-->
<!--<listitem>
<listitem>
<para>Bash &bash-version;</para>
</listitem>-->
</listitem>
<listitem>
<para>Bc &bc-version;</para>
</listitem>
@ -62,9 +70,9 @@
<!--<listitem>
<para>DejaGNU-&dejagnu-version;</para>
</listitem>-->
<!--<listitem revision="systemd">
<listitem revision="systemd">
<para>D-Bus-&dbus-version;</para>
</listitem>-->
</listitem>
<!--<listitem>
<para>Diffutils-&diffutils-version;</para>
</listitem>-->
@ -74,9 +82,9 @@
<!--<listitem revision="sysv">
<para>Eudev-&eudev-version;</para>
</listitem>-->
<!--<listitem>
<listitem>
<para>Expat-&expat-version;</para>
</listitem>-->
</listitem>
<!--<listitem>
<para>Expect-&expect-version;</para>
</listitem>-->
@ -122,9 +130,9 @@
<!--<listitem>
<para>Gzip-&gzip-version;</para>
</listitem>-->
<!--<listitem>
<listitem>
<para>IANA-Etc-&iana-etc-version;</para>
</listitem>-->
</listitem>
<!--<listitem>
<para>Inetutils-&inetutils-version;</para>
</listitem>-->
@ -149,15 +157,15 @@
<!--<listitem>
<para>LFS-Bootscripts-&lfs-bootscripts-version;</para>
</listitem>-->
<!--<listitem>
<listitem>
<para>Libcap-&libcap-version;</para>
</listitem>-->
</listitem>
<!--<listitem>
<para>Libelf-&elfutils-version; (from elfutils)</para>
</listitem>-->
<!--<listitem>
<listitem>
<para>Libffi-&libffi-version;</para>
</listitem>-->
</listitem>
<!--<listitem>
<para>Libpipeline-&libpipeline-version;</para>
</listitem>-->
@ -218,9 +226,9 @@
<listitem>
<para>Python-&python-version;</para>
</listitem>
<!--<listitem>
<listitem>
<para>Readline-&readline-version;</para>
</listitem>-->
</listitem>
<!--<listitem>
<para>Sed-&sed-version;</para>
</listitem>-->
@ -245,9 +253,9 @@
<!--<listitem>
<para>Texinfo-&texinfo-version;</para>
</listitem>-->
<!--<listitem>
<listitem>
<para>Tzdata-&tzdata-version;</para>
</listitem>-->
</listitem>
<!--<listitem>
<para>Util-Linux-&util-linux-version;</para>
</listitem>-->

View File

@ -14,8 +14,8 @@
be used several times. You should ensure that this variable is always defined
throughout the LFS build process. It should be set to the name of the
directory where you will be building your LFS system - we will use
<filename class="directory">/mnt/lfs</filename> as an example, but the
directory choice is up to you. If you are building LFS on a separate
<filename class="directory">/mnt/lfs</filename> as an example, but you may
choose any directory name you want. If you are building LFS on a separate
partition, this directory will be the mount point for the partition.
Choose a directory location and set the variable with the
following command:</para>
@ -25,7 +25,7 @@
<para>Having this variable set is beneficial in that commands such as
<command>mkdir -v $LFS/tools</command> can be typed literally. The shell
will automatically replace <quote>$LFS</quote> with
<quote>/mnt/lfs</quote> (or whatever the variable was set to) when it
<quote>/mnt/lfs</quote> (or whatever value the variable was set to) when it
processes the command line.</para>
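Since every later command depends on this variable, a quick sanity check before starting work never hurts; a minimal sketch, not part of the book's instructions:

<screen role="nodump"><userinput>echo $LFS</userinput></screen>

If the output is not the directory you chose (for example /mnt/lfs), re-issue the export command before going any further.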
<caution>
@ -49,7 +49,7 @@
personal home directory and in <filename>/root/.bash_profile</filename> and
enter the export command above. In addition, the shell specified in the
<filename>/etc/passwd</filename> file for all users that need the
<envar>LFS</envar> variable needs to be bash to ensure that the
<envar>LFS</envar> variable must be bash to ensure that the
<filename>/root/.bash_profile</filename> file is incorporated as a part of
the login process.</para>
@ -59,9 +59,9 @@
a virtual terminal is started. In this case, add the export command to
the <filename>.bashrc</filename> file for the user and
<systemitem class="username">root</systemitem>. In addition,
some distributions have instructions to not run the <filename>.bashrc</filename>
instructions in a non-interactive bash invocation. Be sure to add the
export command before the test for non-interactive use.</para>
some distributions use an "if" test, and do not run the remaining <filename>.bashrc</filename>
instructions for a non-interactive bash invocation. Be sure to place the
export command ahead of the test for non-interactive use.</para>
</note>
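To make the placement concrete, here is a hedged sketch of how the relevant part of such a .bashrc might look; the guard shown is only an example of what a distribution could ship:

# ~/.bashrc (illustrative fragment)
export LFS=/mnt/lfs        # set LFS before any early return

# distribution-provided guard: non-interactive shells stop reading here
case $- in
    *i*) ;;
      *) return ;;
esac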

View File

@ -10,10 +10,19 @@
<title>Creating a File System on the Partition</title>
<para>Now that a blank partition has been set up, the file system can be
created. LFS can use any file system recognized by the Linux kernel, but the
most common types are ext3 and ext4. The choice of file system can be
complex and depends on the characteristics of the files and the size of
<para>A partition is just a range of sectors on a disk drive, delimited by
boundaries set in a partition table. Before the operating system can use
a partition to store any files, the partition must be formatted to contain a file
system, typically consisting of a label, directory blocks, data blocks, and
an indexing scheme to locate a particular file on demand. The file system
also helps the OS keep track of free space on the partition, reserve the
needed sectors when a new file is created or an existing file is extended,
and recycle the free data segments created when files are deleted. It may
also provide support for data redundancy, and for error recovery.</para>
<para>LFS can use any file system recognized by the Linux kernel, but the
most common types are ext3 and ext4. The choice of the right file system can be
complex; it depends on the characteristics of the files and the size of
the partition. For example:</para>
<variablelist>
@ -33,22 +42,22 @@
</varlistentry>
<varlistentry>
<term>ext4</term>
<listitem><para>is the latest version of the ext file system family of
partition types. It provides several new capabilities including
nano-second timestamps, creation and use of very large files (16 TB), and
speed improvements.</para>
<listitem><para>is the latest version of the ext family of
file systems. It provides several new capabilities including
nano-second timestamps, creation and use of very large files
(up to 16 TB), and speed improvements.</para>
</listitem>
</varlistentry>
</variablelist>
<para>Other file systems, including FAT32, NTFS, ReiserFS, JFS, and XFS are
useful for specialized purposes. More information about these file systems
can be found at <ulink
useful for specialized purposes. More information about these file systems,
and many others, can be found at <ulink
url="https://en.wikipedia.org/wiki/Comparison_of_file_systems"/>.</para>
<para>LFS assumes that the root file system (/) is of type ext4. To create
an <systemitem class="filesystem">ext4</systemitem> file system on the LFS
partition, run the following:</para>
partition, issue the following command:</para>
<screen role="nodump"><userinput>mkfs -v -t ext4 /dev/<replaceable>&lt;xxx&gt;</replaceable></userinput></screen>

View File

@ -94,10 +94,10 @@
<para>Swapping is never good. For mechanical hard drives you can generally
tell if a system is swapping by just listening to disk activity and
observing how the system reacts to commands. For an SSD drive you will not
be able to hear swapping but you can tell how much swap space is being used
by the <command>top</command> or <command>free</command> programs. Use of
an SSD drive for a swap partition should be avoided if possible. The first
observing how the system reacts to commands. With an SSD you will not
be able to hear swapping, but you can tell how much swap space is being used
by running the <command>top</command> or <command>free</command> programs. Use of
an SSD for a swap partition should be avoided if possible. The first
reaction to swapping should be to check for an unreasonable command such as
trying to edit a five gigabyte file. If swapping becomes a normal
occurrence, the best solution is to purchase more RAM for your
@ -112,12 +112,12 @@
must be available for GRUB to use during installation of the boot
loader. This partition will normally be labeled 'BIOS Boot' if using
<command>fdisk</command> or have a code of <emphasis>EF02</emphasis> if
using <command>gdisk</command>.</para>
using the <command>gdisk</command> command.</para>
<note><para>The Grub Bios partition must be on the drive that the BIOS
uses to boot the system. This is not necessarily the same drive where the
LFS root partition is located. Disks on a system may use different
partition table types. The requirement for this partition depends
uses to boot the system. This is not necessarily the drive that holds
the LFS root partition. The disks on a system may use different
partition table types. The necessity of the Grub Bios partition depends
only on the partition table type of the boot disk.</para></note>
</sect3>
@ -133,7 +133,7 @@
<listitem><para>/boot &ndash; Highly recommended. Use this partition to
store kernels and other booting information. To minimize potential boot
problems with larger disks, make this the first physical partition on
your first disk drive. A partition size of 200 megabytes is quite
your first disk drive. A partition size of 200 megabytes is
adequate.</para></listitem>
<listitem><para>/boot/efi &ndash; The EFI System Partition, which is
@ -150,41 +150,50 @@
<filename class="directory">/bin</filename>,
<filename class="directory">/lib</filename>, and
<filename class="directory">/sbin</filename> are symlinks to their
counterpart in <filename class="directory">/usr</filename>.
So <filename class="directory">/usr</filename> contains all binaries
counterparts in <filename class="directory">/usr</filename>.
So <filename class="directory">/usr</filename> contains all the binaries
needed for the system to run. For LFS a separate partition for
<filename class="directory">/usr</filename> is normally not needed.
If you need it anyway, you should make a partition large enough to
fit all programs and libraries in the system. The root partition can be
If you create it anyway, you should make a partition large enough to
fit all the programs and libraries in the system. The root partition can be
very small (maybe just one gigabyte) in this configuration, so it's
suitable for a thin client or diskless workstation (where
<filename class="directory">/usr</filename> is mounted from a remote
server). However you should take care that an initramfs (not covered by
LFS) will be needed to boot a system with separate
server). However, you should be aware that an initramfs (not covered by
LFS) will be needed to boot a system with a separate
<filename class="directory">/usr</filename> partition.</para></listitem>
<listitem><para>/opt &ndash; This directory is most useful for
BLFS where multiple installations of large packages like Gnome or KDE can
BLFS, where multiple large packages like KDE or Texlive can
be installed without embedding the files in the /usr hierarchy. If
used, 5 to 10 gigabytes is generally adequate.</para>
</listitem>
<listitem><para>/tmp &ndash; A separate /tmp directory is rare, but
useful if configuring a thin client. This partition, if used, will
usually not need to exceed a couple of gigabytes.</para></listitem>
<listitem revision='sysv'><para>/tmp &ndash; A separate /tmp directory
is rare, but useful if configuring a thin client. This partition, if
used, will usually not need to exceed a couple of
gigabytes. If you have enough RAM, you can mount a
<systemitem class='filesystem'>tmpfs</systemitem> on /tmp to make
access to temporary files faster.</para></listitem>
<listitem revision='systemd'><para>/tmp &ndash; By default, systemd
mounts a <systemitem class='filesystem'>tmpfs</systemitem> here.
If you want to override that behavior, follow
<xref linkend='systemd-no-tmpfs'/> when configuring the LFS
system.</para></listitem>
<listitem><para>/usr/src &ndash; This partition is very
useful for providing a location to store BLFS source files and
share them across LFS builds. It can also be used as a location
for building BLFS packages. A reasonably large partition of 30-50
gigabytes allows plenty of room.</para></listitem>
gigabytes provides plenty of room.</para></listitem>
</itemizedlist>
<para>Any separate partition that you want automatically mounted upon boot
needs to be specified in the <filename>/etc/fstab</filename>. Details
about how to specify partitions will be discussed in <xref
linkend="ch-bootable-fstab"/>. </para>
<para>Any separate partition that you want automatically mounted when the
system starts must be specified in the <filename>/etc/fstab</filename> file.
Details about how to specify partitions will be discussed in <xref
linkend="ch-bootable-fstab"/>.</para>
</sect3>
</sect2>

View File

@ -36,7 +36,7 @@
<listitem>
<para><emphasis role="strong">Bison-2.7</emphasis> (/usr/bin/yacc
should be a link to bison or small script that executes bison)</para>
should be a link to bison or a small script that executes bison)</para>
</listitem>
<listitem>

View File

@ -10,24 +10,28 @@
<title>Mounting the New Partition</title>
<para>Now that a file system has been created, the partition needs to
be made accessible. In order to do this, the partition needs to be
mounted at a chosen mount point. For the purposes of this book, it is
assumed that the file system is mounted under the directory specified by the
<envar>LFS</envar> environment variable as described in the previous section.
<para>Now that a file system has been created, the partition must
be mounted so the host system can access it. This book assumes that
the file system is mounted at the directory specified by the
<envar>LFS</envar> environment variable described in the previous section.
</para>
<para>Create the mount point and mount the LFS file system by running:</para>
<para>Strictly speaking, one cannot "mount a partition". One mounts the <emphasis>file
system</emphasis> embedded in that partition. But since a single partition can't contain
more than one file system, people often speak of the partition and the
associated file system as if they were one and the same.</para>
<para>Create the mount point and mount the LFS file system with these commands:</para>
<screen role="nodump"><userinput>mkdir -pv $LFS
mount -v -t ext4 /dev/<replaceable>&lt;xxx&gt;</replaceable> $LFS</userinput></screen>
<para>Replace <replaceable>&lt;xxx&gt;</replaceable> with the designation of the LFS
<para>Replace <replaceable>&lt;xxx&gt;</replaceable> with the name of the LFS
partition.</para>
<para>If using multiple partitions for LFS (e.g., one for <filename
class="directory">/</filename> and another for <filename
class="directory">/home</filename>), mount them using:</para>
<para>If you are using multiple partitions for LFS (e.g., one for
<filename class="directory">/</filename> and another for <filename
class="directory">/home</filename>), mount them like this:</para>
<screen role="nodump"><userinput>mkdir -pv $LFS
mount -v -t ext4 /dev/<replaceable>&lt;xxx&gt;</replaceable> $LFS
@ -43,13 +47,14 @@ mount -v -t ext4 /dev/<replaceable>&lt;yyy&gt;</replaceable> $LFS/home</userinpu
<option>nodev</option> options). Run the <command>mount</command> command
without any parameters to see what options are set for the mounted LFS
partition. If <option>nosuid</option> and/or <option>nodev</option> are set,
the partition will need to be remounted.</para>
the partition must be remounted.</para>
<warning><para>The above instructions assume that you will not be restarting
<warning><para>The above instructions assume that you will not restart
your computer throughout the LFS process. If you shut down your system,
you will either need to remount the LFS partition each time you restart
the build process or modify your host system's /etc/fstab file to automatically
remount it upon boot. For example:
the build process, or modify the host system's &fstab; file to automatically
remount it when you reboot. For example, you might add this line to your
&fstab; file:
<screen role="nodump">/dev/<replaceable>&lt;xxx&gt;</replaceable> /mnt/lfs ext4 defaults 1 1</screen>
@ -67,7 +72,7 @@ mount -v -t ext4 /dev/<replaceable>&lt;yyy&gt;</replaceable> $LFS/home</userinpu
<para>Replace <replaceable>&lt;zzz&gt;</replaceable> with the name of the
<systemitem class="filesystem">swap</systemitem> partition.</para>
<para>Now that there is an established place to work, it is time to
<para>Now that the new LFS partition is open for business, it's time to
download the packages.</para>
</sect1>

View File

@ -34,7 +34,7 @@
</sect2>
<sect2>
<title>Chapter&nbsp;5&ndash;6</title>
<title>Chapters&nbsp;5&ndash;6</title>
<itemizedlist>
<listitem>
@ -44,8 +44,8 @@
<listitem>
<para>These two chapters <emphasis>must</emphasis> be done as user
<systemitem class="username">lfs</systemitem>.
A <command>su - lfs</command> needs to be done before any task in these
chapters. Failing to do that, you are at risk of installing packages to the
A <command>su - lfs</command> command must be issued before any task in these
chapters. If you don't do that, you are at risk of installing packages to the
host, and potentially rendering it unusable.</para>
</listitem>
@ -54,13 +54,13 @@
are critical. If there is any
doubt about installing a package, ensure any previously expanded
tarballs are removed, then re-extract the package files, and complete all
instructions in that section.</para>
the instructions in that section.</para>
</listitem>
</itemizedlist>
</sect2>
<sect2>
<title>Chapter&nbsp;7&ndash;10</title>
<title>Chapters&nbsp;7&ndash;10</title>
<itemizedlist>
<listitem>
@ -69,7 +69,7 @@
<listitem>
<para>A few operations, from <quote>Changing Ownership</quote> to
<quote>Entering the Chroot Environment</quote> must be done as the
<quote>Entering the Chroot Environment</quote>, must be done as the
<systemitem class="username">root</systemitem> user, with the LFS
environment variable set for the &root; user.</para>
</listitem>
@ -77,7 +77,7 @@
<listitem>
<para> When entering chroot, the LFS environment variable must be set
for <systemitem class="username">root</systemitem>. The LFS
variable is not used afterwards.</para>
variable is not used after entering the chroot environment.</para>
</listitem>
<listitem>

View File

@ -13,17 +13,17 @@
<para>This chapter includes a list of packages that need to be downloaded in
order to build a basic Linux system. The listed version numbers correspond to
versions of the software that are known to work, and this book is based on
their use. We highly recommend against using different versions because the build
their use. We highly recommend against using different versions, because the build
commands for one version may not work with a different version, unless the
different version is specified by a LFS errata or security advisory.
different version is specified by an LFS erratum or security advisory.
The newest package versions may also have problems that require
work-arounds. These work-arounds will be developed and stabilized in the
development version of the book.</para>
<para>For some packages, the release tarball and the (Git or SVN)
repository snapshot tarball for this release may be published with
similar file name. A release tarball contains generated files (for
example, <command>configure</command> script generated by
repository snapshot tarball for that release may be published with
similar file names. A release tarball contains generated files (for
example, a <command>configure</command> script generated by
<command>autoconf</command>), in addition to the contents of the
corresponding repository snapshot. The book uses release tarballs
whenever possible. Using a repository snapshot instead of a release
@ -69,7 +69,7 @@
</listitem>
<listitem>
<para>For stable versions of the book, a tarball of all the needed files
can be downloaded from one of the LFS files mirrors listed at
can be downloaded from one of the mirror sites listed at
<ulink url="https://www.linuxfromscratch.org/mirrors.html#files"/>.</para>
</listitem>
<listitem>

View File

@ -15,14 +15,14 @@
before downloading packages to figure out if a newer version of any
package should be used to avoid security vulnerabilities.</para>
<para>The upstreams may remove old releases, especially when these
<para>The upstream sources may remove old releases, especially when those
releases contain a security vulnerability. If one URL below is not
reachable, you should read the security advisories first to figure out
if a newer version (with the vulnerability fixed) should be used. If
not, try to download the removed package from a mirror. Although it's
possible to download an old release from a mirror even if this release
has been removed because of a vulnerability, it's not recommended to
use a release known to be vulnerable for building your system.</para>
has been removed because of a vulnerability, it's not a good idea to
use a release known to be vulnerable when building your system.</para>
</note>
<para>Download or otherwise obtain the following packages:</para>
@ -462,7 +462,7 @@
<para>MD5 sum: <literal>&linux-md5;</literal></para>
<note>
<para>The Linux kernel is updated relatively often, many times due to
<para>The Linux kernel is updated quite frequently, many times due to
discoveries of security vulnerabilities. The latest available
<!--&linux-major-version;.&linux-minor-version;.x--> stable kernel
version <!--should--> may be

View File

@ -13,25 +13,25 @@
<para>Many people would like to know beforehand approximately how long
it takes to compile and install each package. Because Linux From
Scratch can be built on many different systems, it is impossible to
provide accurate time estimates. The biggest package (Glibc) will
provide absolute time estimates. The biggest package (Glibc) will
take approximately 20 minutes on the fastest systems, but could take
up to three days on slower systems! Instead of providing actual times,
the Standard Build Unit (SBU) measure will be
used instead.</para>
<para>The SBU measure works as follows. The first package to be compiled
from this book is binutils in <xref linkend="chapter-cross-tools"/>. The
time it takes to compile this package is what will be referred to as the
Standard Build Unit or SBU. All other compile times will be expressed relative
to this time.</para>
is binutils in <xref linkend="chapter-cross-tools"/>. The
time it takes to compile this package is what we will refer to as the
Standard Build Unit or SBU. All other compile times will be expressed in
terms of this unit of time.</para>
<para>For example, consider a package whose compilation time is 4.5
SBUs. This means that if a system took 10 minutes to compile and
SBUs. This means that if your system took 10 minutes to compile and
install the first pass of binutils, it will take
<emphasis>approximately</emphasis> 45 minutes to build this example package.
Fortunately, most build times are shorter than the one for binutils.</para>
<emphasis>approximately</emphasis> 45 minutes to build the example package.
Fortunately, most build times are shorter than one SBU.</para>
<para>In general, SBUs are not entirely accurate because they depend on many
<para>SBUs are not entirely accurate because they depend on many
factors, including the host system's version of GCC. They are provided here
to give an estimate of how long it might take to install a package, but the
numbers can vary by as much as dozens of minutes in some cases.</para>
@ -45,15 +45,15 @@
<screen role="nodump"><userinput>export MAKEFLAGS='-j4'</userinput></screen>
<para>or just building with:</para>
<para>or by building with:</para>
<screen role="nodump"><userinput>make -j4</userinput></screen>
<para>When multiple processors are used in this way, the SBU units in the
book will vary even more than they normally would. In some cases, the make
step will simply fail. Analyzing the output of the build process will also
be more difficult because the lines of different processes will be
interleaved. If you run into a problem with a build step, revert back to a
be more difficult because the lines from different processes will be
interleaved. If you run into a problem with a build step, revert to a
single processor build to properly analyze the error messages.</para>
</note>
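If you want a concrete SBU figure for your own machine, one hedged approach is to wrap the first binutils build in the time command when you reach that page; the placeholder below stands for the configure options given there:

<screen role="nodump"><userinput>time { ./configure <replaceable>&lt;options from the binutils pass 1 page&gt;</replaceable> &amp;&amp; make; }</userinput></screen>

The real time reported is one SBU on that system; multiply it by a package's SBU value to estimate that package's build time.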

View File

@ -27,21 +27,21 @@
<note>
<para>Running the test suites in <xref linkend="chapter-cross-tools"/>
and <xref linkend="chapter-temporary-tools"/>
is impossible, since the programs are compiled with a cross-compiler,
so are not supposed to be able to run on the build host.</para>
is pointless; since the test programs are compiled with a cross-compiler,
they probably can't run on the build host.</para>
</note>
<para>A common issue with running the test suites for binutils and GCC
is running out of pseudo terminals (PTYs). This can result in a high
is running out of pseudo terminals (PTYs). This can result in a large
number of failing tests. This may happen for several reasons, but the
most likely cause is that the host system does not have the
<systemitem class="filesystem">devpts</systemitem> file system set up
correctly. This issue is discussed in greater detail at
<ulink url="&lfs-root;lfs/faq.html#no-ptys"/>.</para>
<para>Sometimes package test suites will fail, but for reasons which the
<para>Sometimes package test suites will fail for reasons which the
developers are aware of and have deemed non-critical. Consult the logs located
at <ulink url="&test-results;"/> to verify whether or not these failures are
expected. This site is valid for all tests throughout this book.</para>
expected. This site is valid for all test suites throughout this book.</para>
</sect1>

View File

@ -14,9 +14,9 @@
making a single mistake can damage or destroy a system. Therefore,
the packages in the next two chapters are built as an unprivileged user.
You could use your own user name, but to make it easier to set up a clean
working environment, create a new user called <systemitem
working environment, we will create a new user called <systemitem
class="username">lfs</systemitem> as a member of a new group (also named
<systemitem class="groupname">lfs</systemitem>) and use this user during
<systemitem class="groupname">lfs</systemitem>) and run commands as &lfs-user; during
the installation process. As <systemitem class="username">root</systemitem>,
issue the following commands to add the new user:</para>
@ -24,7 +24,7 @@
useradd -s /bin/bash -g lfs -m -k /dev/null lfs</userinput></screen>
<variablelist>
<title>The meaning of the command line options:</title>
<title>This is what the command line options mean:</title>
<varlistentry>
<term><parameter>-s /bin/bash</parameter></term>
@ -54,7 +54,7 @@ useradd -s /bin/bash -g lfs -m -k /dev/null lfs</userinput></screen>
<term><parameter>-k /dev/null</parameter></term>
<listitem>
<para>This parameter prevents possible copying of files from a skeleton
directory (default is <filename class="directory">/etc/skel</filename>)
directory (the default is <filename class="directory">/etc/skel</filename>)
by changing the input location to the special null device.</para>
</listitem>
</varlistentry>
@ -68,17 +68,17 @@ useradd -s /bin/bash -g lfs -m -k /dev/null lfs</userinput></screen>
</variablelist>
<para>To log in as <systemitem class="username">lfs</systemitem> (as opposed
to switching to user <systemitem class="username">lfs</systemitem> when logged
in as <systemitem class="username">root</systemitem>, which does not require
the <systemitem class="username">lfs</systemitem> user to have a password),
give <systemitem class="username">lfs</systemitem> a password:</para>
<para>If you want to log in as &lfs-user; or switch to &lfs-user; from a
non-&root; user (as opposed to switching to user &lfs-user;
when logged in as &root;, which does not require the &lfs-user; user to
have a password), you need to set a password for &lfs-user;. Issue the
following command as the &root; user to set the password:</para>
<screen role="nodump"><userinput>passwd lfs</userinput></screen>
<para>Grant <systemitem class="username">lfs</systemitem> full access to
all directories under <filename class="directory">$LFS</filename> by making
<systemitem class="username">lfs</systemitem> the directory owner:</para>
all the directories under <filename class="directory">$LFS</filename> by making
<systemitem class="username">lfs</systemitem> the owner:</para>
<screen><userinput>chown -v lfs $LFS/{usr{,/*},lib,var,etc,bin,sbin,tools}
case $(uname -m) in
@ -88,20 +88,20 @@ esac</userinput></screen>
<screen arch="ml_x32" ><userinput>chown -v lfs $LFS/libx32</userinput></screen>
<screen arch="ml_all" ><userinput>chown -v lfs $LFS/{lib32,libx32}</userinput></screen>
<note><para>In some host systems, the following command does not complete
properly and suspends the login to the &lfs-user; user to the background.
<note><para>In some host systems, the following <command>su</command> command does not complete
properly and suspends the login for the &lfs-user; user to the background.
If the prompt "lfs:~$" does not appear immediately, entering the
<command>fg</command> command will fix the issue.</para></note>
<para>Next, login as user <systemitem class="username">lfs</systemitem>.
This can be done via a virtual console, through a display manager, or with
the following substitute/switch user command:</para>
<para>Next, start a shell running as user &lfs-user;. This can be done by
logging in as &lfs-user; on a virtual console, or with the following
substitute/switch user command:</para>
<screen role="nodump"><userinput>su - lfs</userinput></screen>
<para>The <quote><parameter>-</parameter></quote> instructs
<command>su</command> to start a login shell as opposed to a non-login shell.
The difference between these two types of shells can be found in detail in
The difference between these two types of shells is described in detail in
<filename>bash(1)</filename> and <command>info bash</command>.</para>
</sect1>

View File

@ -10,14 +10,15 @@
<title>Creating a limited directory layout in LFS filesystem</title>
<para>The first task performed in the LFS partition is to create a limited
directory hierarchy so that programs compiled in <xref
<para>In this section, we begin populating the LFS filesystem with the
pieces that will constitute the final Linux system. The first step is to
create a limited directory hierarchy, so that the programs compiled in <xref
linkend="chapter-temporary-tools"/> (as well as glibc and libstdc++ in <xref
linkend="chapter-cross-tools"/>) may be installed in their final
location. This is needed so that those temporary programs be overwritten
when rebuilding them in <xref linkend="chapter-building-system"/>.</para>
linkend="chapter-cross-tools"/>) can be installed in their final
location. We do this so those temporary programs will be overwritten when
the final versions are built in <xref linkend="chapter-building-system"/>.</para>
<para>Create the required directory layout by running the following as
<para>Create the required directory layout by issuing the following commands as
<systemitem class="username">root</systemitem>:</para>
<screen><userinput>mkdir -pv $LFS/{etc,var} $LFS/usr/{bin,lib,sbin}
@ -38,10 +39,10 @@ ln -sv usr/lib32 $LFS/lib32
ln -sv usr/libx32 $LFS/libx32</userinput></screen>
<para>Programs in <xref linkend="chapter-temporary-tools"/> will be compiled
with a cross-compiler (more details in section <xref
linkend="ch-tools-toolchaintechnotes"/>). In order to separate this
cross-compiler from the other programs, it will be installed in a special
directory. Create this directory with:</para>
with a cross-compiler (more details can be found in section <xref
linkend="ch-tools-toolchaintechnotes"/>). This cross-compiler will be installed
in a special directory, to separate it from the other programs. Still acting as
&root;, create that directory with this command:</para>
<screen><userinput>mkdir -pv $LFS/tools</userinput></screen>

View File

@ -12,11 +12,11 @@
<para>In this chapter, we will perform a few additional tasks to prepare
for building the temporary system. We will create a set of directories in
<filename class="directory">$LFS</filename> for the installation of the
temporary tools, add an unprivileged user to reduce risk,
<filename class="directory">$LFS</filename> (in which we will install the
temporary tools), add an unprivileged user,
and create an appropriate build environment for that user. We will also
explain the unit of time we use to measure how long LFS packages take to
build, or <quote>SBUs</quote>, and give some information about package
explain the units of time (<quote>SBUs</quote>) we use to measure how
long it takes to build LFS packages, and provide some information about package
test suites.</para>
</sect1>

View File

@ -19,8 +19,10 @@
<literal>exec env -i HOME=$HOME TERM=$TERM PS1='\u:\w\$ ' /bin/bash</literal>
EOF</userinput></screen>
<para>When logged on as user <systemitem class="username">lfs</systemitem>,
the initial shell is usually a <emphasis>login</emphasis> shell which reads
<para>When logged on as user <systemitem class="username">lfs</systemitem>
or switched to the &lfs-user; user using a <command>su</command> command
with <quote><parameter>-</parameter></quote> option,
the initial shell is a <emphasis>login</emphasis> shell which reads
the <filename>/etc/profile</filename> of the host (probably containing some
settings and environment variables) and then <filename>.bash_profile</filename>.
The <command>exec env -i.../bin/bash</command> command in the
@ -32,7 +34,7 @@ EOF</userinput></screen>
ensuring a clean environment.</para>
<para>The new instance of the shell is a <emphasis>non-login</emphasis>
shell, which does not read, and execute, the contents of <filename>/etc/profile</filename> or
shell, which does not read, and execute, the contents of the <filename>/etc/profile</filename> or
<filename>.bash_profile</filename> files, but rather reads, and executes, the
<filename>.bashrc</filename> file instead. Create the
<filename>.bashrc</filename> file now:</para>
@ -73,10 +75,10 @@ EOF</userinput></screen>
<para>The <command>set +h</command> command turns off
<command>bash</command>'s hash function. Hashing is ordinarily a useful
feature&mdash;<command>bash</command> uses a hash table to remember the
full path of executable files to avoid searching the <envar>PATH</envar>
full path to executable files to avoid searching the <envar>PATH</envar>
time and again to find the same executable. However, the new tools should
be used as soon as they are installed. By switching off the hash function,
the shell will always search the <envar>PATH</envar> when a program is to
be used as soon as they are installed. Switching off the hash function forces
the shell to search the <envar>PATH</envar> whenever a program is to
be run. As such, the shell will find the newly compiled tools in
<filename class="directory">$LFS/tools/bin</filename> as soon as they are
available without remembering a previous version of the same program
@ -129,10 +131,10 @@ EOF</userinput></screen>
<varlistentry>
<term><parameter>PATH=/usr/bin</parameter></term>
<listitem>
<para>Many modern linux distributions have merged <filename
<para>Many modern Linux distributions have merged <filename
class="directory">/bin</filename> and <filename
class="directory">/usr/bin</filename>. When this is the case, the standard
<envar>PATH</envar> variable needs just to be set to <filename
<envar>PATH</envar> variable should be set to <filename
class="directory">/usr/bin/</filename> for the <xref
linkend="chapter-temporary-tools"/> environment. When this is not the
case, the following line adds <filename class="directory">/bin</filename>
@ -155,7 +157,7 @@ EOF</userinput></screen>
standard <envar>PATH</envar>, the cross-compiler installed at the beginning
of <xref linkend="chapter-cross-tools"/> is picked up by the shell
immediately after its installation. This, combined with turning off hashing,
limits the risk that the compiler from the host be used instead of the
limits the risk that the compiler from the host is used instead of the
cross-compiler.</para>
</listitem>
</varlistentry>
@ -209,7 +211,8 @@ EOF</userinput></screen>
</important>
<para>Finally, to have the environment fully prepared for building the
temporary tools, source the just-created user profile:</para>
temporary tools, force the <command>bash</command> shell to read
the new user profile:</para>
<screen><userinput>source ~/.bash_profile</userinput></screen>
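A brief, optional spot check that the new environment is in effect (the values are examples; your LFS directory may differ):

<screen role="nodump"><userinput>echo $LFS
echo $PATH</userinput></screen>

The first line should print the directory you chose earlier (e.g. /mnt/lfs), and PATH should begin with the tools directory, for instance /mnt/lfs/tools/bin.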

View File

@ -18,10 +18,10 @@
<screen><userinput>rm -rf /usr/share/{info,man,doc}/*</userinput></screen>
<para>Second, the libtool .la files are only useful when linking with static
libraries. They are unneeded and potentially harmful when using dynamic
shared libraries, especially when using non-autotools build systems.
While still in chroot, remove those files now:</para>
<para>Second, on a modern Linux system, the libtool .la files are only
useful for libltdl. No libraries in LFS are expected to be loaded by
libltdl, and it's known that some .la files can cause BLFS packages
to fail to build. Remove those files now:</para>
<screen><userinput>find /usr/{lib,libexec} -name \*.la -delete</userinput><userinput arch="ml_32">
find /usr/lib32 -name \*.la -delete</userinput><userinput arch="ml_x32">
@ -98,7 +98,8 @@ find /usr/lib{,x}32 -name \*.la -delete</userinput></screen>
<para>Before making a backup, unmount the virtual file systems:</para>
<screen role="nodump"><userinput>umount $LFS/dev/pts
<screen role="nodump"><userinput>mountpoint -q $LFS/dev/shm &amp;&amp; umount $LFS/dev/shm
umount $LFS/dev/pts
umount $LFS/{sys,proc,run,dev}</userinput></screen>
<para>

View File

@ -10,10 +10,10 @@
<title>Creating Directories</title>
<para>It is time to create the full structure in the LFS file system.</para>
<para>It is time to create the full directory structure in the LFS file system.</para>
<note><para>Some of the directories mentioned in this section may be
already created earlier with explicit instructions or when installing some
<note><para>Some of the directories mentioned in this section may have
already been created earlier with explicit instructions, or when installing some
packages. They are repeated below for completeness.</para></note>
<para>Create some root-level directories that are not in the limited set
@ -45,14 +45,14 @@ install -dv -m 1777 /tmp /var/tmp</userinput></screen>
support has already been created during the previous installation steps.</para>
<para>Directories are, by default, created with permission mode 755, but
this is not desirable for all directories. In the commands above, two
this is not desirable everywhere. In the commands above, two
changes are made&mdash;one to the home directory of user <systemitem
class="username">root</systemitem>, and another to the directories for
temporary files.</para>
<para>The first mode change ensures that not just anybody can enter
the <filename class="directory">/root</filename> directory&mdash;the
same as a normal user would do with his or her home directory. The
the <filename class="directory">/root</filename> directory&mdash;just
like a normal user would do with his or her own home directory. The
second mode change makes sure that any user can write to the
<filename class="directory">/tmp</filename> and <filename
class="directory">/var/tmp</filename> directories, but cannot remove
@ -62,14 +62,14 @@ install -dv -m 1777 /tmp /var/tmp</userinput></screen>
<sect2>
<title>FHS Compliance Note</title>
<para>The directory tree is based on the Filesystem Hierarchy Standard
<para>This directory tree is based on the Filesystem Hierarchy Standard
(FHS) (available at <ulink
url="https://refspecs.linuxfoundation.org/fhs.shtml"/>). The FHS also specifies
the optional existence of some directories such as <filename
the optional existence of additional directories such as <filename
class="directory">/usr/local/games</filename> and <filename
class="directory">/usr/share/games</filename>. We create only the
directories that are needed. However, feel free to create these
directories. </para>
class="directory">/usr/share/games</filename>. In LFS, we create only the
directories that are really necessary. However, feel free to create more
directories, if you wish. </para>
</sect2>

View File

@ -11,22 +11,22 @@
<title>Introduction</title>
<para>This chapter shows how to build the last missing bits of the temporary
system: the tools needed by the build machinery of various packages. Now
system: the tools needed to build the various packages. Now
that all circular dependencies have been resolved, a <quote>chroot</quote>
environment, completely isolated from the host operating system (except for
the running kernel), can be used for the build.</para>
<para>For proper operation of the isolated environment, some communication
with the running kernel must be established. This is done through the
so-called <emphasis>Virtual Kernel File Systems</emphasis>, which must be
mounted when entering the chroot environment. You may want to check
that they are mounted by issuing <command>findmnt</command>.</para>
with the running kernel must be established. This is done via the
so-called <emphasis>Virtual Kernel File Systems</emphasis>, which will be
mounted before entering the chroot environment. You may want to verify
that they are mounted by issuing the <command>findmnt</command> command.</para>
<para>Until <xref linkend="ch-tools-chroot"/>, the commands must be
run as <systemitem class="username">root</systemitem>, with the
<envar>LFS</envar> variable set. After entering chroot, all commands
are run as &root;, fortunately without access to the OS of the computer
you built LFS on. Be careful anyway, as it is easy to destroy the whole
LFS system with badly formed commands.</para>
LFS system with bad commands.</para>
</sect1>

View File

@ -14,12 +14,14 @@
<primary sortas="e-/dev/">/dev/*</primary>
</indexterm>
<para>Various file systems exported by the kernel are used to communicate to
and from the kernel itself. These file systems are virtual in that no disk
<para>Applications running in user space utilize various file
systems exported by the kernel to communicate
with the kernel itself. These file systems are virtual: no disk
space is used for them. The content of the file systems resides in
memory.</para>
memory. These file systems must be mounted in the $LFS directory tree
so the applications can find them in the chroot environment.</para>
<para>Begin by creating directories onto which the file systems will be
<para>Begin by creating directories on which the file systems will be
mounted:</para>
<screen><userinput>mkdir -pv $LFS/{dev,proc,sys,run}</userinput></screen>
@ -27,20 +29,31 @@
<sect2 id="ch-tools-bindmount">
<title>Mounting and Populating /dev</title>
<para>During a normal boot, the kernel automatically mounts the
<systemitem class="filesystem">devtmpfs</systemitem> filesystem on the
<filename class="directory">/dev</filename> directory, and allow the
devices to be created dynamically on that virtual filesystem as they
are detected or accessed. Device creation is generally done during the
boot process by the kernel and Udev.
Since this new system does not yet have Udev and
has not yet been booted, it is necessary to mount and populate
<filename class="directory">/dev</filename> manually. This is
accomplished by bind mounting the host system's
<para>During a normal boot of the LFS system, the kernel automatically
mounts the <systemitem class="filesystem">devtmpfs</systemitem>
filesystem on the
<filename class="directory">/dev</filename> directory; the kernel
creates device nodes on that virtual filesystem during the boot process
or when a device is first detected or accessed. The udev daemon may
change the owner or permission of the device nodes created by the
kernel, or create new device nodes or symlinks to ease the work of
distro maintainers or system administrators. (See
<xref linkend='ch-config-udev-device-node-creation'/> for details.)
If the host kernel supports &devtmpfs;, we can simply mount a
&devtmpfs; at <filename class='directory'>$LFS/dev</filename> and rely
on the kernel to populate it (the LFS building process does not need
the additional work done by the udev daemon on &devtmpfs;).</para>
<para>However, some host kernels may lack &devtmpfs; support, and those
host distributions maintain the content of
<filename class="directory">/dev</filename> by other means.
So the only host-agnostic way to populate
<filename class="directory">$LFS/dev</filename> is
to bind mount the host system's
<filename class="directory">/dev</filename> directory. A bind mount is
a special type of mount that allows you to create a mirror of a
directory or mount point to some other location. Use the following
command to achieve this:</para>
directory or mount point at some other location. Use the following
command to do this:</para>
<screen><userinput>mount -v --bind /dev $LFS/dev</userinput></screen>
@ -89,8 +102,15 @@ mount -vt tmpfs tmpfs $LFS/run</userinput></screen>
The /run tmpfs was mounted above so in this case only a
directory needs to be created.</para>
<para>In other host systems <filename>/dev/shm</filename> is a mount point
for a tmpfs. In that case the mount of /dev above will only create
/dev/shm as a directory in the chroot environment. In this situation
we must explicitly mount a tmpfs:</para>
<screen><userinput>if [ -h $LFS/dev/shm ]; then
mkdir -pv $LFS/$(readlink $LFS/dev/shm)
else
mount -t tmpfs -o nosuid,nodev tmpfs $LFS/dev/shm
fi</userinput></screen>
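Once the virtual kernel file systems described in this section are mounted, a hedged way to review them at a glance is:

<screen role="nodump"><userinput>findmnt | grep $LFS</userinput></screen>

Entries for the mounts made in this section, such as $LFS/dev, $LFS/proc, $LFS/sys, and $LFS/run (and, depending on the host, $LFS/dev/shm), should appear; anything missing can be remounted before entering chroot.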
</sect2>

View File

@ -40,12 +40,13 @@
<sect2 role="installation">
<title>Installation of Autoconf</title>
<!--
<para>First, apply a patch that fixes several problems that occur with the latest
perl, libtool, and bash versions.</para>
<screen><userinput remap="pre">patch -Np1 -i ../&autoconf-fixes-patch;</userinput></screen>
-->
<para>First, fix several problems with the tests caused by bash-5.2 and later:</para>
<screen><userinput remap="pre">sed -e 's/SECONDS|/&amp;SHLVL|/' \
-e '/BASH_ARGV=/a\ /^SHLVL=/ d' \
-i.orig tests/local.at</userinput></screen>
<para>Prepare Autoconf for compilation:</para>
<screen><userinput remap="configure">./configure --prefix=/usr</userinput></screen>

View File

@ -178,16 +178,16 @@ cd build</userinput></screen>
<screen><userinput remap="test">make -k check</userinput></screen>
<para>Twelve tests fail in the <command>gold</command> testsuite when the
<para>Twelve tests fail in the <command>gold</command> testsuite when the
<option>--enable-default-pie</option> and
<option>--enable-default-ssp</option> options are passed to GCC. There
is also a known failure in the <command>as</command> tests.</para>
<!-- Fixed in 2.39
https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=01ae03b
<para>One gold test, <filename>pr17704a_test</filename>, is known to
fail if <parameter>CONFIG_IA32_EMULATION</parameter> is disabled in the
kernel configuration of the host system.</para>
-->
<option>--enable-default-ssp</option> options are passed to GCC.
<!-- Caused by egrep deprecation. Note that we don't "patch" temp grep.
And it seems unworthy to add a sed into temp grep just for one test
failure. (I don't really agree to "patch" grep in the first place,
anyway.) -->
The test named <filename>ar with versioned solib</filename> is also
known to fail.</para>
<para>Install the package:</para>
<screen><userinput remap="install">make tooldir=/usr install</userinput></screen>

View File

@ -16,10 +16,10 @@
<para>There are also several files installed in the /usr/lib and /usr/libexec
directories with a file name extension of .la. These are "libtool archive"
files. As already said, they are only useful when linking with static
libraries. They are unneeded, and potentially harmful, when using dynamic
shared libraries, specially when using also non-autotools build systems.
To remove them, run:</para>
files. As already said, on a modern Linux system the libtool .la files are
only useful for libltdl. No libraries in LFS are expected to be loaded
by libltdl, and it's known that some .la files can cause BLFS packages
to fail to build. Remove those files now:</para>
<screen><userinput>find /usr/lib /usr/libexec -name \*.la -delete</userinput><userinput arch="ml_32,ml_all">
find /usr/lib32 -name \*.la -delete</userinput><userinput arch="ml_x32,ml_all">

View File

@ -46,7 +46,7 @@
<para>Now fix a programming error identified upstream:</para>
<screen><userinput remap="pre">sed -i -i '241i UPREF(m);' interpret.h</userinput></screen>
<screen><userinput remap="pre">sed -i '241i UPREF(m);' interpret.h</userinput></screen>
<para>Prepare Gawk for compilation:</para>

View File

@ -132,7 +132,7 @@ cd build</userinput></screen>
PIE (position-independent executable) is a technique to produce
binary programs that can be loaded anywhere in memory. Without PIE,
the security feature named ASLR (Address Space Layout Randomization)
can be applied for the shared libraries, but not the exectutable
can be applied for the shared libraries, but not the executable
itself. Enabling PIE allows ASLR for the executables in addition to
the shared libraries, and mitigates some attacks based on fixed
addresses of sensitive code or data in the executables.
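To see this property on an installed binary, one can inspect its ELF header; an illustrative check (any dynamically linked program works as the target):

<screen role="nodump"><userinput>readelf -h /usr/bin/ls | grep 'Type:'</userinput></screen>

A PIE binary is reported as DYN, while a binary built without PIE is reported as EXEC.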

View File

@ -58,7 +58,8 @@
<screen><userinput remap="test">make check</userinput></screen>
<!-- <para>One test, run-elfclassify.sh, is known to fail.</para>-->
<para>One test named <filename>run-low_high_pc.sh</filename> is known to
fail on 32-bit x86 systems.</para>
<para>Install only Libelf:</para>

View File

@ -11,13 +11,13 @@
<title>Package Management</title>
<para>Package Management is an often requested addition to the LFS Book. A
Package Manager allows tracking the installation of files making it easy to
Package Manager tracks the installation of files, making it easier to
remove and upgrade packages. As well as the binary and library files, a
package manager will handle the installation of configuration files. Before
you begin to wonder, NO&mdash;this section will not talk about nor recommend
any particular package manager. What it provides is a roundup of the more
popular techniques and how they work. The perfect package manager for you may
be among these techniques or may be a combination of two or more of these
be among these techniques, or it may be a combination of two or more of these
techniques. This section briefly mentions issues that may arise when upgrading
packages.</para>
@ -32,14 +32,14 @@
<listitem>
<para>There are multiple solutions for package management, each having
its strengths and drawbacks. Including one that satisfies all audiences
its strengths and drawbacks. Finding one solution that satisfies all audiences
is difficult.</para>
</listitem>
</itemizedlist>
<para>There are some hints written on the topic of package management. Visit
the <ulink url="&hints-root;">Hints Project</ulink> and see if one of them
fits your need.</para>
fits your needs.</para>
<sect2 id='pkgmgmt-upgrade-issues'>
<title>Upgrade Issues</title>
@ -51,18 +51,18 @@
<itemizedlist>
<listitem>
<para>If Linux kernel needs to be upgraded (for example, from
5.10.17 to 5.10.18 or 5.11.1), nothing else need to be rebuilt.
The system will keep working fine thanks to the well-defined border
between kernel and userspace. Specifically, Linux API headers
need not to be (and should not be, see the next item) upgraded
alongside the kernel. You'll need to reboot your system to use the
<para>If the Linux kernel needs to be upgraded (for example, from
5.10.17 to 5.10.18 or 5.11.1), nothing else needs to be rebuilt.
The system will keep working fine thanks to the well-defined interface
between the kernel and user space. Specifically, Linux API headers
need not be (and should not be, see the next item) upgraded
along with the kernel. You will merely need to reboot your system to use the
upgraded kernel.</para>
</listitem>
<listitem>
<para>If Linux API headers or Glibc needs to be upgraded to a newer
version, (e.g. from glibc-2.31 to glibc-2.32), it is safer to
<para>If Linux API headers or glibc need to be upgraded to a newer
version, (e.g., from glibc-2.31 to glibc-2.32), it is safer to
rebuild LFS. Though you <emphasis>may</emphasis> be able to rebuild
all the packages in their dependency order, we do not recommend
it. </para>
@ -70,44 +70,44 @@
<listitem> <para>If a package containing a shared library is updated, and
if the name of the library changes, then any packages dynamically
linked to the library need to be recompiled in order to link against the
linked to the library must be recompiled, to link against the
newer library. (Note that there is no correlation between the package
version and the name of the library.) For example, consider a package
foo-1.2.3 that installs a shared library with name <filename
class='libraryfile'>libfoo.so.1</filename>. If you upgrade the package to
a newer version foo-1.2.4 that installs a shared library with name
foo-1.2.3 that installs a shared library with the name <filename
class='libraryfile'>libfoo.so.1</filename>. Suppose you upgrade the package to
a newer version foo-1.2.4 that installs a shared library with the name
<filename class='libraryfile'>libfoo.so.2</filename>. In this case, any
packages that are dynamically linked to <filename
class='libraryfile'>libfoo.so.1</filename> need to be recompiled to link
against <filename class='libraryfile'>libfoo.so.2</filename> in order to
use the new library version. You should not remove the previous
libraries unless all the dependent packages are recompiled.</para>
use the new library version. You should not remove the old
libraries until all the dependent packages have been recompiled.</para>
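<para>As a rough illustration (<filename class='libraryfile'>libfoo</filename>
and the program <command>bar</command> here are hypothetical names), the SONAME
recorded in the new library, and the library a program was actually linked
against, can be checked like this:</para>
<screen role="nodump"><userinput>readelf -d /usr/lib/libfoo.so.2 | grep SONAME
ldd /usr/bin/bar | grep libfoo</userinput></screen>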
</listitem>
<listitem> <para>If a package containing a shared library is updated,
and the name of library doesn't change, but the version number of the
and the name of the library doesn't change, but the version number of the
library <emphasis role="bold">file</emphasis> decreases (for example,
the name of the library is kept named
the library is still named
<filename class='libraryfile'>libfoo.so.1</filename>,
but the name of library file is changed from
but the name of the library file is changed from
<filename class='libraryfile'>libfoo.so.1.25</filename> to
<filename class='libraryfile'>libfoo.so.1.24</filename>),
you should remove the library file from the previously installed version
(<filename class='libraryfile'>libfoo.so.1.25</filename> in the case).
Or, a <command>ldconfig</command> run (by yourself using a command
(<filename class='libraryfile'>libfoo.so.1.25</filename> in this case).
Otherwise, an <command>ldconfig</command> command (invoked by yourself from the command
line, or by the installation of some package) will reset the symlink
<filename class='libraryfile'>libfoo.so.1</filename> to point to
the old library file because it seems having a <quote>newer</quote>
version, as its version number is larger. This situation may happen if
you have to downgrade a package, or the package changes the versioning
scheme of library files suddenly.</para> </listitem>
the old library file because it seems to be a <quote>newer</quote>
version; its version number is larger. This situation may arise if
you have to downgrade a package, or if the authors change the versioning
scheme for library files.</para> </listitem>
<listitem><para>If a package containing a shared library is updated,
and the name of library doesn't change, but a severe issue
and the name of the library doesn't change, but a severe issue
(especially, a security vulnerability) is fixed, all running programs
linked to the shared library should be restarted. The following
command, run as <systemitem class="username">root</systemitem> after
updating, will list what is using the old versions of those libraries
the update is complete, will list which processes are using the old versions of those libraries
(replace <replaceable>libfoo</replaceable> with the name of the
library):</para>
@ -115,33 +115,33 @@
tr -cd 0-9\\n | xargs -r ps u</userinput></screen>
<para>
If <application>OpenSSH</application> is being used for accessing
the system and it is linked to the updated library, you need to
restart <command>sshd</command> service, then logout, login again,
and rerun that command to confirm nothing is still using the
If <application>OpenSSH</application> is being used to access
the system and it is linked to the updated library, you must
restart the <command>sshd</command> service, then log out, log back in,
and rerun the preceding <command>ps</command> command to confirm that nothing is still using the
deleted libraries.
</para>
<para revision='systemd'>
If the <command>systemd</command> daemon (running as PID 1) is
linked to the updated library, you can restart it without reboot
linked to the updated library, you can restart it without rebooting
by running <command>systemctl daemon-reexec</command> as the
<systemitem class='username'>root</systemitem> user.
</para></listitem>
<listitem>
<para>If a binary or a shared library is overwritten, the processes
using the code or data in the binary or library may crash. The
correct way to update a binary or a shared library without causing
<para>If an executable program or a shared library is overwritten, the processes
using the code or data in that program or library may crash. The
correct way to update a program or a shared library without causing
the process to crash is to remove it first, then install the new
version into position. The <command>install</command> command
provided by <application>Coreutils</application> has already
implemented this and most packages use it to install binaries and
version. The <command>install</command> command
provided by <application>coreutils</application> has already
implemented this, and most packages use that command to install binary files and
libraries. This means that you won't be troubled by this issue most of the time.
However, the install process of some packages (notably Mozilla JS
in BLFS) just overwrites the file if it exists and causes a crash, so
in BLFS) just overwrites the file if it exists; this causes a crash. So
it's safer to save your work and close unneeded running processes
before updating a package.</para>
before updating a package.</para> <!-- binary is an adjective, not a noun. -->
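<para>As a sketch of why this matters (<command>foo</command> is a hypothetical
program), the safe sequence unlinks the old file first, so any running process
keeps its already-opened copy, instead of overwriting the file in place; this is
essentially what <command>install</command> does:</para>
<screen role="nodump"><userinput># unsafe: cp foo /usr/bin/foo    (overwrites the file in place)
rm -f /usr/bin/foo               # unlink first; running processes keep the old inode
cp foo /usr/bin/foo              # then create the new file</userinput></screen>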
</listitem>
</itemizedlist>
@ -152,36 +152,36 @@
<para>The following are some common package management techniques. Before
making a decision on a package manager, do some research on the various
techniques, particularly the drawbacks of the particular scheme.</para>
techniques, particularly the drawbacks of each particular scheme.</para>
<sect3>
<title>It is All in My Head!</title>
<para>Yes, this is a package management technique. Some folks do not find
the need for a package manager because they know the packages intimately
and know what files are installed by each package. Some users also do not
<para>Yes, this is a package management technique. Some folks do not
need a package manager because they know the packages intimately
and know which files are installed by each package. Some users also do not
need any package management because they plan on rebuilding the entire
system when a package is changed.</para>
system whenever a package is changed.</para>
</sect3>
<sect3>
<title>Install in Separate Directories</title>
<para>This is a simplistic package management that does not need any extra
package to manage the installations. Each package is installed in a
<para>This is a simplistic package management technique that does not need a
special program to manage the packages. Each package is installed in a
separate directory. For example, package foo-1.1 is installed in
<filename class='directory'>/usr/pkg/foo-1.1</filename>
and a symlink is made from <filename>/usr/pkg/foo</filename> to
<filename class='directory'>/usr/pkg/foo-1.1</filename>. When installing
a new version foo-1.2, it is installed in
<filename class='directory'>/usr/pkg/foo-1.1</filename>. When
a new version foo-1.2 comes along, it is installed in
<filename class='directory'>/usr/pkg/foo-1.2</filename> and the previous
symlink is replaced by a symlink to the new version.</para>
<para>Environment variables such as <envar>PATH</envar>,
<envar>LD_LIBRARY_PATH</envar>, <envar>MANPATH</envar>,
<envar>INFOPATH</envar> and <envar>CPPFLAGS</envar> need to be expanded to
include <filename>/usr/pkg/foo</filename>. For more than a few packages,
include <filename>/usr/pkg/foo</filename>. If you install more than a few packages,
this scheme becomes unmanageable.</para>
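<para>A minimal sketch of this scheme (the package name foo-1.1 and the paths
are illustrative only):</para>
<screen role="nodump"><userinput>./configure --prefix=/usr/pkg/foo-1.1
make
make install
ln -sfn /usr/pkg/foo-1.1 /usr/pkg/foo
export PATH=/usr/pkg/foo/bin:$PATH   # and similarly for the other variables</userinput></screen>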
</sect3>
@ -190,15 +190,15 @@
<title>Symlink Style Package Management</title>
<para>This is a variation of the previous package management technique.
Each package is installed similar to the previous scheme. But instead of
making the symlink, each file is symlinked into the
Each package is installed as in the previous scheme. But instead of
making the symlink via a generic package name, each file is symlinked into the
<filename class='directory'>/usr</filename> hierarchy. This removes the
need to expand the environment variables. Though the symlinks can be
created by the user to automate the creation, many package managers have
been written using this approach. A few of the popular ones include Stow,
created by the user, many package managers use this approach, and
automate the creation of the symlinks. A few of the popular ones include Stow,
Epkg, Graft, and Depot.</para>
<para>The installation needs to be faked, so that the package thinks that
<para>The installation script needs to be fooled, so the package thinks
it is installed in <filename class="directory">/usr</filename> though in
reality it is installed in the
<filename class="directory">/usr/pkg</filename> hierarchy. Installing in
@ -216,7 +216,7 @@ make install</userinput></screen>
<filename class='libraryfile'>/usr/pkg/libfoo/1.1/lib/libfoo.so.1</filename>
instead of <filename class='libraryfile'>/usr/lib/libfoo.so.1</filename>
as you would expect. The correct approach is to use the
<envar>DESTDIR</envar> strategy to fake installation of the package. This
<envar>DESTDIR</envar> variable to direct the installation. This
approach works as follows:</para>
<screen role="nodump"><userinput>./configure --prefix=/usr
@ -224,8 +224,8 @@ make
make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
<para>Most packages support this approach, but there are some which do not.
For the non-compliant packages, you may either need to manually install the
package, or you may find that it is easier to install some problematic
For the non-compliant packages, you may either need to install the
package manually, or you may find that it is easier to install some problematic
packages into <filename class='directory'>/opt</filename>.</para>
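<para>Once the faked installation is done, the files still have to be linked
into the <filename class="directory">/usr</filename> hierarchy. A rough sketch
using GNU Stow (one of the package managers mentioned above), assuming the
DESTDIR layout shown in the preceding command, might look like this:</para>
<screen role="nodump"><userinput>stow --dir=/usr/pkg/libfoo --target=/ --verbose 1.1</userinput></screen>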
</sect3>
@ -237,14 +237,14 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
the package. After the installation, a simple use of the
<command>find</command> command with the appropriate options can generate
a log of all the files installed after the timestamp file was created. A
package manager written with this approach is install-log.</para>
package manager that uses this approach is install-log.</para>
<para>Though this scheme has the advantage of being simple, it has two
drawbacks. If, during installation, the files are installed with any
timestamp other than the current time, those files will not be tracked by
the package manager. Also, this scheme can only be used when one package
is installed at a time. The logs are not reliable if two packages are
being installed on two different consoles.</para>
the package manager. Also, this scheme can only be used when packages
are installed one at a time. The logs are not reliable if two packages are
installed simultaneously from two different consoles.</para>
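<para>A bare-bones sketch of the timestamp approach (the file names here are
arbitrary):</para>
<screen role="nodump"><userinput>touch /tmp/timestamp                    # reference point, created just before installing
make install
find /usr -newer /tmp/timestamp -not -type d &gt; /tmp/foo-1.1.files</userinput></screen>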
</sect3>
@ -262,12 +262,12 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
calls that modify the filesystem. For this approach to work, all the
executables need to be dynamically linked without the suid or sgid bit.
Preloading the library may cause some unwanted side-effects during
installation. Therefore, it is advised that one performs some tests to
ensure that the package manager does not break anything and logs all the
installation. Therefore, it's a good idea to perform some tests to
ensure that the package manager does not break anything, and that it logs all the
appropriate files.</para>
<para>The second technique is to use <command>strace</command>, which
logs all system calls made during the execution of the installation
<para>Another technique is to use <command>strace</command>, which
logs all the system calls made during the execution of the installation
scripts.</para>
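<para>For instance, a minimal sketch of the <command>strace</command> variant
(the log file name is arbitrary):</para>
<screen role="nodump"><userinput>strace -f -e trace=%file -o /tmp/foo-install.log make install
grep -E 'O_CREAT|O_WRONLY' /tmp/foo-install.log     # files that were created or written</userinput></screen>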
</sect3>
@ -275,10 +275,10 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
<title>Creating Package Archives</title>
<para>In this scheme, the package installation is faked into a separate
tree as described in the Symlink style package management. After the
tree as previously described in the symlink style package management section. After the
installation, a package archive is created using the installed files.
This archive is then used to install the package either on the local
machine or can even be used to install the package on other machines.</para>
This archive is then used to install the package on the local
machine or even on other machines.</para>
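<para>A simplified sketch of this idea (the package name, version, and paths
are made up for illustration):</para>
<screen role="nodump"><userinput>make DESTDIR=/tmp/foo-1.1-image install      # faked installation
tar -C /tmp/foo-1.1-image -cJf /tmp/foo-1.1.tar.xz .
tar -C / -xpJf /tmp/foo-1.1.tar.xz           # later, on this or another machine</userinput></screen>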
<para>This approach is used by most of the package managers found in the
commercial distributions. Examples of package managers that follow this
@ -289,10 +289,10 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
package management for LFS systems is located at <ulink
url="&hints-root;fakeroot.txt"/>.</para>
<para>Creation of package files that include dependency information is
complex and is beyond the scope of LFS.</para>
<para>The creation of package files that include dependency information is
complex, and beyond the scope of LFS.</para>
<para>Slackware uses a <command>tar</command> based system for package
<para>Slackware uses a <command>tar</command>-based system for package
archives. This system purposely does not handle package dependencies
as more complex package managers do. For details of Slackware package
management, see <ulink
@ -322,8 +322,8 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
another computer with the same architecture as the base system is as
simple as using <command>tar</command> on the LFS partition that contains
the root directory (about 250MB uncompressed for a base LFS build), copying
that file via network transfer or CD-ROM to the new system and expanding
it. From that point, a few configuration files will have to be changed.
that file via network transfer or CD-ROM / USB stick to the new system, and expanding
it. After that, a few configuration files will have to be changed.
Configuration files that may need to be updated include:
<filename>/etc/hosts</filename>,
<filename>/etc/fstab</filename>,
@ -342,17 +342,17 @@ make DESTDIR=/usr/pkg/libfoo/1.1 install</userinput></screen>
</phrase>
</para>
<para>A custom kernel may need to be built for the new system depending on
<para>A custom kernel may be needed for the new system, depending on
differences in system hardware and the original kernel
configuration.</para>
<note><para>There have been some reports of issues when copying between
similar but not identical architectures. For instance, the instruction set
for an Intel system is not identical with an AMD processor and later
versions of some processors may have instructions that are unavailable in
for an Intel system is not identical with the AMD processor's instructions, and later
versions of some processors may provide instructions that are unavailable with
earlier versions.</para></note>
<para>Finally the new system has to be made bootable via <xref
<para>Finally, the new system has to be made bootable via <xref
linkend="ch-bootable-grub"/>.</para>
</sect2>

View File

@ -46,7 +46,7 @@ EOF</userinput></screen>
</sect2>
<sect2>
<sect2 id='systemd-no-tmpfs'>
<title>Disabling tmpfs for /tmp</title>
<para>By default, <filename class="directory">/tmp</filename> is created as

View File

@ -93,7 +93,7 @@
</sect3>
<sect3>
<sect3 id='ch-config-udev-device-node-creation'>
<title>Device Node Creation</title>
<para>Device files are created by the kernel by the <systemitem

View File

@ -32,6 +32,7 @@ sysfs /sys sysfs nosuid,noexec,nodev 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /run tmpfs defaults 0 0
devtmpfs /dev devtmpfs mode=0755,nosuid 0 0
tmpfs /dev/shm tmpfs nosuid,nodev 0 0
# End /etc/fstab</literal>
EOF</userinput></screen>

View File

@ -117,7 +117,9 @@ General architecture-dependent options ---&gt;
Device Drivers ---&gt;
Graphics support ---&gt;
Frame buffer Devices ---&gt;
[*] Support for frame buffer devices ----
&lt;*&gt; Support for frame buffer devices ---&gt;
Console display driver support ---&gt;
[*] Framebuffer Console support [CONFIG_FRAMEBUFFER_CONSOLE]
Generic Driver Options ---&gt;
[ ] Support for uevent helper [CONFIG_UEVENT_HELPER]
[*] Maintain a devtmpfs filesystem to mount at /dev [CONFIG_DEVTMPFS]
@ -156,6 +158,8 @@ Device Drivers ---&gt;
Graphics support ---&gt;
Frame buffer Devices ---&gt;
&lt;*&gt; Support for frame buffer devices ---&gt;
Console display driver support ---&gt;
[*] Framebuffer Console support [CONFIG_FRAMEBUFFER_CONSOLE]
File systems ---&gt;
[*] Inotify support for userspace [CONFIG_INOTIFY_USER]
Pseudo filesystems ---&gt;
@ -301,6 +305,20 @@ Device Drivers ---&gt;
</listitem>
</varlistentry>
<varlistentry>
<term><parameter>Framebuffer Console support</parameter></term>
<listitem>
<para>This is needed to display the Linux console on a frame
buffer device. To allow the kernel to print debug messages at an
early boot stage, it shouldn't be built as a kernel module
unless an initramfs will be used. And, if
<option>CONFIG_DRM</option> (Direct Rendering Manager) is enabled,
it's likely <option>CONFIG_DRM_FBDEV_EMULATION</option> (Enable
legacy fbdev support for your modesetting driver) should be
enabled as well.</para>
</listitem>
</varlistentry>
<varlistentry>
<term><parameter>Support x2apic</parameter></term>
<listitem>
@ -349,12 +367,18 @@ Device Drivers ---&gt;
the <filename class="directory">/boot</filename> directory.</para>
<caution>
<para>If the host system has a separate /boot partition, the files copied
below should go there. The easiest way to do that is to bind /boot on the
host (outside chroot) to /mnt/lfs/boot before proceeding. As the
&root; user in the <emphasis>host system</emphasis>:</para>
<para>If you've decided to use a separate &boot-dir; partition for the
LFS system (maybe sharing a &boot-dir; partition with the host
distro), the files copied below should go there. The easiest way to
do that is to create the entry for &boot-dir; in &fstab; first (read
the previous section for details), then issue the following command
as the &root; user in the
<emphasis>chroot environment</emphasis>:</para>
<screen role="nodump"><userinput>mount --bind /boot /mnt/lfs/boot</userinput></screen>
<screen role="nodump"><userinput>mount /boot</userinput></screen>
<para>The path to the device node is omitted in the command because
<command>mount</command> can read it from &fstab;.</para>
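<para>For example, such an &fstab; entry might look like the following line
(the device node and the file system type depend on your own partitioning
scheme, so adjust them accordingly):</para>
<screen role="nodump"><literal>/dev/&lt;yyy&gt;     /boot        ext4     defaults            1     2</literal></screen>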
</caution>
<para>The path to the kernel image may vary depending on the platform being

chapter11/afterlfs.xml Normal file
View File

@ -0,0 +1,266 @@
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE sect1 PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN"
"http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
<!ENTITY % general-entities SYSTEM "../general.ent">
%general-entities;
]>
<sect1 id="afterlfs">
<?dbhtml filename="afterlfs.html"?>
<title>Getting Started After LFS</title>
<sect2>
<title>Deciding what to do next</title>
<para>
Now that LFS is complete and you have a bootable system, what do you do?
The next step is to decide how to use it. Generally, there are two broad
categories to consider: workstation or server. Indeed, these categories
are not mutually exclusive. The applications needed for each category
can be combined onto a single system, but let's look at them separately
for now.
</para>
<para>
A server is the simpler category. Generally this consists of a web
server such as the
<ulink url="&blfs-book;server/apache.html">Apache HTTP Server</ulink>
and a database server such as
<ulink url="&blfs-book;server/mariadb.html">MariaDB</ulink>.
However, other services are possible. The operating system
embedded in a single-use device falls into this category.
</para>
<para>
On the other hand, a workstation is much more complex. It generally
requires a graphical user environment such as
<ulink url="&blfs-book;lxde/lxde.html">LXDE</ulink>,
<ulink url="&blfs-book;xfce/xfce.html">XFCE</ulink>,
<ulink url="&blfs-book;kde/kde.html">KDE</ulink>, or
<ulink url="&blfs-book;gnome/gnome.html">Gnome</ulink>
based on a basic
<ulink url="&blfs-book;x/installing.html">graphical environment</ulink>
and several graphical applications such as the
<ulink url="&blfs-book;xsoft/firefox.html">Firefox web browser</ulink>,
<ulink url="&blfs-book;xsoft/thunderbird.html">Thunderbird email client</ulink>,
or
<ulink url="&blfs-book;xsoft/libreoffice.html">LibreOffice office suite</ulink>.
These applications require many more supporting packages and
libraries (several hundred, depending on the desired capabilities).
</para>
<para>
In addition to the above, there is a set of applications for system
management for all kinds of systems. These applications are all in the
BLFS book. Not all packages are needed in every environment. For
example, <ulink url="&blfs-book;basicnet/dhcpcd.html">dhcpcd</ulink> is
not normally appropriate for a server, and <ulink
url="&blfs-book;basicnet/wireless_tools.html">wireless_tools</ulink>
is normally only useful for a laptop system.
</para>
</sect2>
<sect2>
<title>Working in a basic LFS environment</title>
<para>
When you initially boot into LFS, you have all the internal tools to build
additional packages. Unfortunately, the user environment is quite sparse.
There are a couple of ways to improve this:
</para>
<sect3>
<title>Work from the LFS host in chroot</title>
<para>
This method provides a complete graphical environment where a
full-featured browser and copy/paste capabilities are available. This method
allows you to use applications like the host's version of wget to download
package sources to a location that will be available when working in the chroot
environment.
</para>
<para>
In order to properly build packages in chroot, you will also need to
remember to mount the virtual file systems if they are not already
mounted. One way to do this is to create a script on the
<emphasis role="bold">HOST</emphasis> system:
</para>
<screen><command>cat &gt; ~/mount-virt.sh &lt;&lt; "EOF"
#!/bin/bash
function mountbind
{
if ! mountpoint $LFS/$1 >/dev/null; then
$SUDO mount --bind /$1 $LFS/$1
echo $LFS/$1 mounted
else
echo $LFS/$1 already mounted
fi
}
function mounttype
{
if ! mountpoint $LFS/$1 >/dev/null; then
$SUDO mount -t $2 $3 $4 $5 $LFS/$1
echo $LFS/$1 mounted
else
echo $LFS/$1 already mounted
fi
}
if [ $EUID -ne 0 ]; then
SUDO=sudo
else
SUDO=""
fi
if [ x$LFS == x ]; then
echo "LFS not set"
exit 1
fi
mountbind dev
mounttype dev/pts devpts devpts -o gid=5,mode=620
mounttype proc proc proc
mounttype sys sysfs sysfs
mounttype run tmpfs run
if [ -h $LFS/dev/shm ]; then
mkdir -pv $LFS/$(readlink $LFS/dev/shm)
else
mounttype dev/shm tmpfs tmpfs -o nosuid,nodev
fi
#mountbind usr/src
#mountbind boot
#mountbind home
EOF</command></screen>
<para>
Note that the last three commands in the script are commented out. These
are useful if those directories are mounted as separate partitions on the
host system and will be mounted when booting the completed LFS/BLFS system.
</para>
<para>
The script can be run with <command>bash ~/mount-virt.sh</command> as
either a regular user (recommended) or as &root;. If run as a regular
user, sudo is required on the host system.
</para>
<para>
Another issue pointed out by the script is where to store downloaded
package files. This location is arbitrary. It can be in a regular
user's home directory such as ~/sources or in a global location like
/usr/src. Our recommendation is not to mix BLFS sources and LFS sources
in (from the chroot environment) /sources. In any case, the packages
must be accessible inside the chroot environment.
</para>
<para>
A last convenience feature presented here is to streamline the process
of entering the chroot environment. This can be done with an alias
placed in a user's ~/.bashrc file on the host system:
</para>
<screen><command>alias lfs='sudo /usr/sbin/chroot /mnt/lfs /usr/bin/env -i HOME=/root TERM="$TERM" PS1="\u:\w\\\\$ "
PATH=/bin:/usr/bin:/sbin:/usr/sbin /bin/bash --login'</command></screen>
<para>
This alias is a little tricky because of the quoting and levels of
backslash characters. It must be all on a single line. The above command
has been split in two for presentation purposes.
</para>
</sect3>
<sect3>
<title>Work remotely via ssh</title>
<para>
This method also provides a full graphical environment, but first
requires installing
<ulink url="&blfs-book;postlfs/openssh.html">sshd</ulink> and
<ulink url="&blfs-book;basicnet/wget.html">wget</ulink>
on the LFS system, usually in chroot. It also requires a second
computer. This method has the advantage of simplicity, since it avoids
the complexity of the chroot environment. It also uses your LFS-built
kernel for all additional packages and still provides a complete system
for installing packages.
</para>
</sect3>
<sect3>
<title>Work from the LFS command line</title>
<para>
This method requires installing
<ulink url="&blfs-book;general/libtasn1.html">libtasn1</ulink>,
<ulink url="&blfs-book;postlfs/p11-kit.html">p11-kit</ulink>,
<ulink url="&blfs-book;postlfs/make-ca.html">make-ca</ulink>,
<ulink url="&blfs-book;basicnet/wget.html">wget</ulink>,
<ulink url="&blfs-book;general/gpm.html">gpm</ulink>, and
<ulink url="&blfs-book;basicnet/links.html">links</ulink>
(or <ulink url="&blfs-book;basicnet/lynx.html">lynx</ulink>)
in chroot and then rebooting into the new LFS system. At this
point the default system has six virtual consoles. Switching
consoles is as easy as using the
<keycombo>
<keycap>Alt</keycap>
<keycap>Fx</keycap>
</keycombo>
key combinations where <keycap>Fx</keycap> is
between <keycap>F1</keycap> and <keycap>F6</keycap>.
The
<keycombo>
<keycap>Alt</keycap>
<keycap function='left'/>
</keycombo>
and
<keycombo>
<keycap>Alt</keycap>
<keycap function='right'/>
</keycombo>
combinations also will change the console.
</para>
<para>
At this point you can log into two different virtual consoles and run
the links or lynx browser in one console and bash in the other. GPM
then allows copying commands from the browser with the left mouse
button, switching consoles, and pasting into the other console.
</para>
<note>
<para>
As a side note, switching of virtual consoles can also be done from
an X Window instance with the
<keycombo>
<keycap>Ctrl</keycap>
<keycap>Alt</keycap>
<keycap>Fx</keycap>
</keycombo>
key combination, but the mouse copy operation does not work
between the graphical interface and a virtual console. You can
return to the X Window display with the
<keycombo>
<keycap>Ctrl</keycap>
<keycap>Alt</keycap>
<keycap>Fx</keycap>
</keycombo>
combination, where <keycap>Fx</keycap> is usually
<keycap>F1</keycap> but may be <keycap>F7</keycap>.
</para>
</note>
</sect3>
</sect2>
</sect1>

View File

@ -15,5 +15,6 @@
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="getcounted.xml"/>
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="reboot.xml"/>
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="whatnow.xml"/>
<xi:include xmlns:xi="http://www.w3.org/2001/XInclude" href="afterlfs.xml"/>
</chapter>

View File

@ -9,17 +9,22 @@
<?dbhtml filename="reboot.html"?>
<title>Rebooting the System</title>
<para>
Now that all of the software has been installed, it is time to reboot
your computer. However, there are still a few things to check.
Here are some suggestions:</para>
<para>Now that all of the software has been installed, it is time to reboot
your computer. However, you should be aware of a few things. The system you
<!--
The system you
have created in this book is quite minimal, and most likely will not have
the functionality you would need to be able to continue forward. By installing
a few extra packages from the BLFS book while still in our current chroot
environment, you can leave yourself in a much better position to continue on
once you reboot into your new LFS installation. Here are some suggestions:</para>
-->
<itemizedlist>
<!--
<listitem><para>A text mode browser such as <ulink
url='&blfs-book;basicnet/lynx.html'>Lynx</ulink>
will allow you to easily view the BLFS book in one virtual terminal, while
@ -60,14 +65,21 @@
install <ulink
url='&blfs-book;basicnet/wpa_supplicant.html'>wpa_supplicant</ulink>.
</para></listitem>
-->
<listitem>
<para>
Install any <ulink
url='&blfs-book;postlfs/firmware.html'>firmware</ulink> needed if the
kernel driver for your hardware requires some firmware files to function
properly.
</para>
</listitem>
<listitem><para>Install <ulink
url='&blfs-book;postlfs/firmware.html'>firmwares</ulink> if the kernel
driver for your hardware require some firmware to function properly.
</para></listitem>
<listitem><para>Finally, a review of the following configuration files
is also appropriate at this point.</para>
<listitem>
<para>
A review of the following configuration files
is also appropriate at this point.
</para>
<itemizedlist>
<listitem><para>/etc/bashrc </para></listitem>
@ -86,14 +98,11 @@
</itemizedlist>
<para>Now that we have said that, let's move on to booting our shiny new LFS
installation for the first time! First exit from the chroot environment:</para>
installation for the first time! <emphasis>First exit from the chroot
environment</emphasis>:</para>
<screen><userinput>logout</userinput></screen>
<!-- We need to show the user the details...
<para>Unmount the LFS file system hierarchy:</para>
<screen><userinput>umount -Rv $LFS</userinput></screen>
-->
<!-- We need to show the user the details...-->
<para>Then unmount the virtual file systems:</para>
@ -106,17 +115,19 @@ umount -v $LFS/sys</userinput></screen>
<para>If multiple partitions were created, unmount the other
partitions before unmounting the main one, like this:</para>
<screen role="nodump"><userinput>umount -v $LFS/usr
umount -v $LFS/home
<screen role="nodump"><userinput>umount -v $LFS/home
umount -v $LFS</userinput></screen>
<para>Unmount the LFS file system itself:</para>
<screen role="nodump"><userinput>umount -v $LFS</userinput></screen>
<para>Now, reboot the system with:</para>
<para>Now, reboot the system.</para>
<screen role="nodump"><userinput>shutdown -r now</userinput></screen>
<!-- Commented out because we don't have a host system requirement on
its init, and different init system may recommend different commands
for reboot. -->
<!--<screen role="nodump"><userinput>shutdown -r now</userinput></screen>-->
<para>Assuming the GRUB boot loader was set up as outlined earlier, the menu
is set to boot <emphasis>LFS &version;</emphasis> automatically.</para>

View File

@ -60,8 +60,7 @@ PRETTY_NAME="Linux From Scratch &version;"
VERSION_CODENAME="&lt;your name here&gt;"
EOF</userinput></screen>
<para>Be sure to put some sort of customization for the fields
'DISTRIB_CODENAME' and 'VERSION_CODENAME' to make the system uniquely
yours.</para>
<para>Be sure to customize the fields 'DISTRIB_CODENAME' and
'VERSION_CODENAME' to make the system uniquely yours.</para>
</sect1>

View File

@ -8,7 +8,7 @@
<sect1 id="ch-finish-whatnow">
<?dbhtml filename="whatnow.html"?>
<title>What Now?</title>
<title>Additional Resources</title>
<para>Thank you for reading this LFS book. We hope that you have
found this book helpful and have learned more about the system
@ -38,7 +38,8 @@
</listitem>
<listitem>
<para><ulink url="https://seclists.org/oss-sec/">Open Source Security Mailing List</ulink></para>
<para><ulink url="https://seclists.org/oss-sec/">Open Source Security
Mailing List</ulink></para>
<para>This is a mailing list for discussion of security flaws,
concepts, and practices in the Open Source community.</para>
@ -46,7 +47,7 @@
</itemizedlist>
</listitem>
<!--
<listitem>
<para>Beyond Linux From Scratch</para>
@ -55,7 +56,7 @@
Book. The BLFS project is located at <ulink url="&blfs-book;"/>.
</para>
</listitem>
-->
<listitem>
<para>LFS Hints</para>

View File

@ -121,6 +121,12 @@
<!ENTITY root "<systemitem class='username'>root</systemitem>">
<!ENTITY lfs-user "<systemitem class='username'>lfs</systemitem>">
<!ENTITY devtmpfs "<systemitem class='filesystem'>devtmpfs</systemitem>">
<!ENTITY fstab "<filename>/etc/fstab</filename>">
<!ENTITY boot-dir "<filename class='directory'>/boot</filename>">
<!ENTITY ch-final "<xref linkend='chapter-building-system'/>">
<!ENTITY ch-tmp-cross "<xref linkend='chapter-temporary-tools'/>">
<!ENTITY ch-tmp-chroot "<xref linkend='chapter-chroot-temporary-tools'/>">
<!ENTITY % packages-entities SYSTEM "packages.ent">
%packages-entities;

View File

@ -34,7 +34,7 @@ function find_max( $lines, $regex_match, $regex_replace )
// Isolate the version and put in an array
$slice = preg_replace( $regex_replace, "$1", $line );
if ( $slice == $line ) continue;
if ( strcmp( $slice, $line ) == 0 ) continue;
array_push( $a, $slice );
}
@ -266,6 +266,15 @@ if ( $package == "zstd" ) $dirpath = "https://github.com/facebook/zstd/rel
if ( $package == "elfutils" )
return find_max( $lines, "/^\d/", "/^(\d[\d\.]+\d)\/.*$/" );
if ( $package == "iana-etc" )
return find_max( $lines, "/^\s*20\d\d/", "/^\s+(\d+).*$/" );
if ( $package == "meson" )
return find_max( $lines, "/^\s+\d\./", "/^\s+([\d\.]+)$/" );
if ( $package == "shadow" )
return find_max( $lines, "/^\s+\d\./", "/^\s+([\d\.]+)$/" );
if ( $package == "XML-Parser" )
{
$max = find_max( $lines, "/$package/", "/^.*$package-([\d\._]*\d).tar.*$/" );
@ -292,6 +301,9 @@ if ( $package == "zstd" ) $dirpath = "https://github.com/facebook/zstd/rel
return str_replace( "_", ".", $max );
}
if ( $package == "libffi" )
return find_max( $lines, "/v\d/", "/^.*v([\d\.]+)$/" );
if ( $package == "procps-ng" )
return find_max( $lines, "/v\d/", "/^.*v([\d\.]+)$/" );

View File

@ -48,20 +48,20 @@
<!ENTITY automake-fin-du "116 MB">
<!ENTITY automake-fin-sbu "less than 0.1 SBU (about 7.7 SBU with tests)">
<!ENTITY bash-version "5.1.16">
<!ENTITY bash-size "10,277 KB">
<!ENTITY bash-version "5.2">
<!ENTITY bash-size "10,695 KB">
<!ENTITY bash-url "&gnu;bash/bash-&bash-version;.tar.gz">
<!ENTITY bash-md5 "c17b20a09fc38d67fb303aeb6c130b4e">
<!ENTITY bash-md5 "cfb4cf795fc239667f187b3d6b3d396f">
<!ENTITY bash-home "&gnu-software;bash/">
<!ENTITY bash-tmp-du "64 MB">
<!ENTITY bash-tmp-sbu "0.5 SBU">
<!ENTITY bash-fin-du "50 MB">
<!ENTITY bash-fin-sbu "1.4 SBU">
<!ENTITY bc-version "6.0.2">
<!ENTITY bc-version "6.0.4">
<!ENTITY bc-size "442 KB">
<!ENTITY bc-url "https://github.com/gavinhoward/bc/releases/download/&bc-version;/bc-&bc-version;.tar.xz">
<!ENTITY bc-md5 "101e62dd9c2b90bf18c38d858aa36f0d">
<!ENTITY bc-md5 "1e1c90de1a11f3499237425de1673ef1">
<!ENTITY bc-home "https://git.yzena.com/gavin/bc">
<!ENTITY bc-fin-du "7.4 MB">
<!ENTITY bc-fin-sbu "less than 0.1 SBU">
@ -114,10 +114,10 @@
<!ENTITY coreutils-fin-du "159 MB">
<!ENTITY coreutils-fin-sbu "2.8 SBU">
<!ENTITY dbus-version "1.14.0">
<!ENTITY dbus-version "1.14.2">
<!ENTITY dbus-size "1,332 KB">
<!ENTITY dbus-url "https://dbus.freedesktop.org/releases/dbus/dbus-&dbus-version;.tar.xz">
<!ENTITY dbus-md5 "ddd5570aff05191dbee8e42d751f1b7d">
<!ENTITY dbus-md5 "2d9a6b441e6f844d41c35a004f0ef50b">
<!ENTITY dbus-home "https://www.freedesktop.org/wiki/Software/dbus">
<!ENTITY dbus-fin-du "19 MB">
<!ENTITY dbus-fin-sbu "0.2 SBU">
@ -163,10 +163,10 @@
<!ENTITY eudev-fin-du "83 MB">
<!ENTITY eudev-fin-sbu "0.2 SBU">
<!ENTITY expat-version "2.4.8">
<!ENTITY expat-size "444 KB">
<!ENTITY expat-version "2.4.9">
<!ENTITY expat-size "449 KB">
<!ENTITY expat-url "&sourceforge;expat/expat-&expat-version;.tar.xz">
<!ENTITY expat-md5 "0584a7318a4c007f7ec94778799d72fe">
<!ENTITY expat-md5 "8d7fcf7d02d08bf79d9ae5c21cc72c03">
<!ENTITY expat-home "https://libexpat.github.io/">
<!ENTITY expat-fin-du "12 MB">
<!ENTITY expat-fin-sbu "0.1 SBU">
@ -317,10 +317,10 @@
<!ENTITY gzip-fin-du "21 MB">
<!ENTITY gzip-fin-sbu "0.3 SBU">
<!ENTITY iana-etc-version "20220812">
<!ENTITY iana-etc-version "20220922">
<!ENTITY iana-etc-size "584 KB">
<!ENTITY iana-etc-url "https://github.com/Mic92/iana-etc/releases/download/&iana-etc-version;/iana-etc-&iana-etc-version;.tar.gz">
<!ENTITY iana-etc-md5 "851a53efd53c77d0ad7b3d2b68d8a3fc">
<!ENTITY iana-etc-md5 "2fdc746cfc1bc10f841760fd6a92618c">
<!ENTITY iana-etc-home "https://www.iana.org/protocols">
<!ENTITY iana-etc-fin-du "4.8 MB">
<!ENTITY iana-etc-fin-sbu "less than 0.1 SBU">
@ -391,7 +391,7 @@
<!ENTITY less-fin-du "4.2 MB">
<!ENTITY less-fin-sbu "less than 0.1 SBU">
<!ENTITY lfs-bootscripts-version "20220723"> <!-- Scripts depend on this format -->
<!ENTITY lfs-bootscripts-version "20220920"> <!-- Scripts depend on this format -->
<!ENTITY lfs-bootscripts-size "BOOTSCRIPTS-SIZE KB">
<!ENTITY lfs-bootscripts-url "&downloads-root;lfs-bootscripts-&lfs-bootscripts-version;.tar.xz">
<!ENTITY lfs-bootscripts-md5 "BOOTSCRIPTS-MD5SUM">
@ -399,18 +399,18 @@
<!ENTITY lfs-bootscripts-cfg-du "BOOTSCRIPTS-INSTALL-KB KB">
<!ENTITY lfs-bootscripts-cfg-sbu "less than 0.1 SBU">
<!ENTITY libcap-version "2.65">
<!ENTITY libcap-size "176 KB">
<!ENTITY libcap-version "2.66">
<!ENTITY libcap-size "178 KB">
<!ENTITY libcap-url "&kernel;linux/libs/security/linux-privs/libcap2/libcap-&libcap-version;.tar.xz">
<!ENTITY libcap-md5 "3543e753dd941255c4def6cc67a462bb">
<!ENTITY libcap-md5 "00afd6e13bc94b2543b1a70770bdb41f">
<!ENTITY libcap-home "https://sites.google.com/site/fullycapable/">
<!ENTITY libcap-fin-du "2.7 MB">
<!ENTITY libcap-fin-sbu "less than 0.1 SBU">
<!ENTITY libffi-version "3.4.2">
<!ENTITY libffi-size "1,320 KB">
<!ENTITY libffi-version "3.4.3">
<!ENTITY libffi-size "1,327 KB">
<!ENTITY libffi-url "https://github.com/libffi/libffi/releases/download/v&libffi-version;/libffi-&libffi-version;.tar.gz">
<!ENTITY libffi-md5 "294b921e6cf9ab0fbaea4b639f8fdbe8">
<!ENTITY libffi-md5 "b57b0ac1d1072681cee9148a417bd2ec">
<!ENTITY libffi-home "https://sourceware.org/libffi/">
<!ENTITY libffi-fin-du "10 MB">
<!ENTITY libffi-fin-sbu "1.8 SBU">
@ -433,12 +433,12 @@
<!ENTITY linux-major-version "5">
<!ENTITY linux-minor-version "19">
<!ENTITY linux-patch-version "8">
<!ENTITY linux-patch-version "12">
<!--<!ENTITY linux-version "&linux-major-version;.&linux-minor-version;">-->
<!ENTITY linux-version "&linux-major-version;.&linux-minor-version;.&linux-patch-version;">
<!ENTITY linux-size "128,547 KB">
<!ENTITY linux-size "128,599 KB">
<!ENTITY linux-url "&kernel;linux/kernel/v&linux-major-version;.x/linux-&linux-version;.tar.xz">
<!ENTITY linux-md5 "ae08d14f9b7ed3d47c0d22b6d235507a">
<!ENTITY linux-md5 "6a8c953d04986027b033bc92185745bf">
<!ENTITY linux-home "https://www.kernel.org/">
<!-- measured for 5.13.4 / gcc-11.1.0 on x86_64 : minimum is
allnoconfig rounded down to allow for ongoing cleanups,
@ -611,11 +611,11 @@
<!ENTITY python-docs-md5 "d5923c417995334e72c2561812905d23">
<!ENTITY python-docs-size "7,176 KB">
<!ENTITY readline-version "8.1.2">
<!ENTITY readline-soversion "8.1"><!-- used for stripping -->
<!ENTITY readline-size "2,923 KB">
<!ENTITY readline-version "8.2">
<!ENTITY readline-soversion "8.2"><!-- used for stripping -->
<!ENTITY readline-size "2,973 KB">
<!ENTITY readline-url "&gnu;readline/readline-&readline-version;.tar.gz">
<!ENTITY readline-md5 "12819fa739a78a6172400f399ab34f81">
<!ENTITY readline-md5 "4aa1b31be779e6b84f9a96cb66bc50f6">
<!ENTITY readline-home "https://tiswww.case.edu/php/chet/readline/rltop.html">
<!ENTITY readline-fin-du "15 MB">
<!ENTITY readline-fin-sbu "0.1 SBU">
@ -703,10 +703,10 @@
<!ENTITY texinfo-fin-du "114 MB">
<!ENTITY texinfo-fin-sbu "0.6 SBU">
<!ENTITY tzdata-version "2022c">
<!ENTITY tzdata-size "423 KB">
<!ENTITY tzdata-version "2022d">
<!ENTITY tzdata-size "424 KB">
<!ENTITY tzdata-url "https://www.iana.org/time-zones/repository/releases/tzdata&tzdata-version;.tar.gz">
<!ENTITY tzdata-md5 "4e3b2369b68e713ba5d3f7456f20bfdb">
<!ENTITY tzdata-md5 "e55dbeb2121230a0ae7c58dbb47ae8c8">
<!ENTITY tzdata-home "https://www.iana.org/time-zones">
<!ENTITY udev-lfs-version "udev-lfs-20171102">

View File

@ -11,29 +11,29 @@
<title>General Compilation Instructions</title>
<para>When building packages there are several assumptions made within
the instructions:</para>
<para>Here are some things you should know about building each package:</para>
<itemizedlist>
<listitem>
<para>Several of the packages are patched before compilation, but only when
<para>Several packages are patched before compilation, but only when
the patch is needed to circumvent a problem. A patch is often needed in
both this and the following chapters, but sometimes in only one location.
both the current and the following chapters, but sometimes, when the same package
is built more than once, the patch is not needed right away.
Therefore, do not be concerned if instructions for a downloaded patch seem
to be missing. Warning messages about <emphasis>offset</emphasis> or
<emphasis>fuzz</emphasis> may also be encountered when applying a patch. Do
not worry about these warnings, as the patch was still successfully
not worry about these warnings; the patch was still successfully
applied.</para>
</listitem>
<listitem>
<para>During the compilation of most packages, there will be several
warnings that scroll by on the screen. These are normal and can safely be
ignored. These warnings are as they appear&mdash;warnings about
<para>During the compilation of most packages, some
warnings will scroll by on the screen. These are normal and can safely be
ignored. These warnings are usually about
deprecated, but not invalid, use of the C or C++ syntax. C standards change
fairly often, and some packages still use the older standard. This is not a
problem, but does prompt the warning.</para>
fairly often, and some packages have not yet been updated. This is not a
serious problem, but it does cause the warnings to appear.</para>
</listitem>
<listitem>
@ -69,25 +69,25 @@
symbolic link to <command>gawk</command>.</para></listitem>
<listitem override='bullet'><para><command>/usr/bin/yacc</command> is a
symbolic link to <command>bison</command> or a small script that
symbolic link to <command>bison</command>, or to a small script that
executes bison.</para></listitem>
</itemizedlist>
</important>
<important>
<para>To re-emphasize the build process:</para>
<para>Here is a synopsis of the build process.</para>
<orderedlist numeration="arabic" spacing="compact">
<listitem>
<para>Place all the sources and patches in a directory that will be
accessible from the chroot environment such as
accessible from the chroot environment, such as
<filename class="directory">/mnt/lfs/sources/</filename>.<!-- Do
<emphasis>not</emphasis> put sources in
<filename class="directory">/mnt/lfs/tools/</filename>. --></para>
</listitem>
<listitem>
<para>Change to the sources directory.</para>
<para>Change to the <filename class="directory">/mnt/lfs/sources/</filename> directory.</para>
</listitem>
<listitem id='buildinstr' xreflabel='Package build instructions'>
<para>For each package:</para>
@ -97,22 +97,21 @@
to be built. In <xref linkend="chapter-cross-tools"/> and
<xref linkend="chapter-temporary-tools"/>, ensure you are
the <emphasis>lfs</emphasis> user when extracting the package.</para>
<para>All methods to get the source code tree being built
in-position, except extracting the package tarball, are not
supported. Notably, using <command>cp -R</command> to copy the
<para>Do not use any method except the <command>tar</command> command
to extract the source code. Notably, using the <command>cp -R</command>
command to copy the
source code tree somewhere else can destroy links and
timestamps in the sources tree and cause building
failure.</para>
timestamps in the source tree, and cause the build to fail.</para>
</listitem>
<listitem>
<para>Change to the directory created when the package was
extracted.</para>
</listitem>
<listitem>
<para>Follow the book's instructions for building the package.</para>
<para>Follow the instructions for building the package.</para>
</listitem>
<listitem>
<para>Change back to the sources directory.</para>
<para>Change back to the sources directory when the build is complete.</para>
</listitem>
<listitem>
<para>Delete the extracted source directory unless instructed otherwise.</para>

View File

@ -10,25 +10,25 @@
<title>Introduction</title>
<para>This part is divided into three stages: first building a cross
compiler and its associated libraries; second, use this cross toolchain
<para>This part is divided into three stages: first, building a cross
compiler and its associated libraries; second, using this cross toolchain
to build several utilities in a way that isolates them from the host
distribution; third, enter the chroot environment, which further improves
host isolation, and build the remaining tools needed to build the final
distribution; and third, entering the chroot environment (which further improves
host isolation) and constructing the remaining tools needed to build the final
system.</para>
<important><para>With this part begins the real work of building a new
system. It requires much care in ensuring that the instructions are
followed exactly as the book shows them. You should try to understand
what they do, and whatever your eagerness to finish your build, you should
refrain from blindly type them as shown, but rather read documentation when
<important><para>This is where the real work of building a new system
begins. Be very careful to follow the instructions exactly as the book
shows them. You should try to understand what each command does,
and no matter how eager you are to finish your build, you should
refrain from blindly typing the commands as shown. Read the documentation when
there is something you do not understand. Also, keep track of your typing
and of the output of commands, by sending them to a file, using the
<command>tee</command> utility. This allows for better diagnosing
if something gets wrong.</para></important>
and of the output of commands, by using the <command>tee</command> utility
to send the terminal output to a file. This makes debugging easier
if something goes wrong.</para></important>
<para>The next section gives a technical introduction to the build process,
while the following one contains <emphasis role="strong">very
<para>The next section is a technical introduction to the build process,
while the following one presents <emphasis role="strong">very
important</emphasis> general instructions.</para>
</sect1>

View File

@ -11,26 +11,26 @@
<title>Toolchain Technical Notes</title>
<para>This section explains some of the rationale and technical details
behind the overall build method. It is not essential to immediately
behind the overall build method. Don't try to immediately
understand everything in this section. Most of this information will be
clearer after performing an actual build. This section can be referred
to at any time during the process.</para>
clearer after performing an actual build. Come back and re-read this chapter
at any time during the build process.</para>
<para>The overall goal of <xref linkend="chapter-cross-tools"/> and <xref
linkend="chapter-temporary-tools"/> is to produce a temporary area that
contains a known-good set of tools that can be isolated from the host system.
By using <command>chroot</command>, the commands in the remaining chapters
will be contained within that environment, ensuring a clean, trouble-free
linkend="chapter-temporary-tools"/> is to produce a temporary area
containing a set of tools that are known to be good, and that are isolated from the host system.
By using the <command>chroot</command> command, the compilations in the remaining chapters
will be isolated within that environment, ensuring a clean, trouble-free
build of the target LFS system. The build process has been designed to
minimize the risks for new readers and to provide the most educational value
minimize the risks for new readers, and to provide the most educational value
at the same time.</para>
<para>The build process is based on the process of
<para>This build process is based on
<emphasis>cross-compilation</emphasis>. Cross-compilation is normally used
for building a compiler and its toolchain for a machine different from
the one that is used for the build. This is not strictly needed for LFS,
to build a compiler and its associated toolchain for a machine different from
the one that is used for the build. This is not strictly necessary for LFS,
since the machine where the new system will run is the same as the one
used for the build. But cross-compilation has the great advantage that
used for the build. But cross-compilation has one great advantage:
anything that is cross-compiled cannot depend on the host environment.</para>
<sect2 id="cross-compile" xreflabel="About Cross-Compilation">
@ -39,47 +39,46 @@
<note>
<para>
The LFS book is not, and does not contain a general tutorial to
build a cross (or native) toolchain. Don't use the command in the
book for a cross toolchain which will be used for some purpose other
The LFS book is not (and does not contain) a general tutorial to
build a cross (or native) toolchain. Don't use the commands in the
book to build a cross toolchain intended for some purpose other
than building LFS, unless you really understand what you are doing.
</para>
</note>
<para>Cross-compilation involves some concepts that deserve a section on
their own. Although this section may be omitted in a first reading,
coming back to it later will be beneficial to your full understanding of
<para>Cross-compilation involves some concepts that deserve a section of
their own. Although this section may be omitted on a first reading,
coming back to it later will help you gain a fuller understanding of
the process.</para>
<para>Let us first define some terms used in this context:</para>
<para>Let us first define some terms used in this context.</para>
<variablelist>
<varlistentry><term>build</term><listitem>
<varlistentry><term>The build</term><listitem>
<para>is the machine where we build programs. Note that this machine
is referred to as the <quote>host</quote> in other
sections.</para></listitem>
is also referred to as the <quote>host</quote>.</para></listitem>
</varlistentry>
<varlistentry><term>host</term><listitem>
<varlistentry><term>The host</term><listitem>
<para>is the machine/system where the built programs will run. Note
that this use of <quote>host</quote> is not the same as in other
sections.</para></listitem>
</varlistentry>
<varlistentry><term>target</term><listitem>
<varlistentry><term>The target</term><listitem>
<para>is only used for compilers. It is the machine the compiler
produces code for. It may be different from both build and
host.</para></listitem>
produces code for. It may be different from both the build and
the host.</para></listitem>
</varlistentry>
</variablelist>
<para>As an example, let us imagine the following scenario (sometimes
referred to as <quote>Canadian Cross</quote>): we may have a
referred to as <quote>Canadian Cross</quote>): we have a
compiler on a slow machine only, let's call it machine A, and the compiler
ccA. We may have also a fast machine (B), but with no compiler, and we may
want to produce code for another slow machine (C). To build a
compiler for machine C, we would have three stages:</para>
ccA. We also have a fast machine (B), but no compiler for (B), and we
want to produce code for a third, slow machine (C). We will build a
compiler for machine C in three stages.</para>
<informaltable align="center">
<tgroup cols="5">
@ -95,24 +94,24 @@
<tbody>
<row>
<entry>1</entry><entry>A</entry><entry>A</entry><entry>B</entry>
<entry>build cross-compiler cc1 using ccA on machine A</entry>
<entry>Build cross-compiler cc1 using ccA on machine A.</entry>
</row>
<row>
<entry>2</entry><entry>A</entry><entry>B</entry><entry>C</entry>
<entry>build cross-compiler cc2 using cc1 on machine A</entry>
<entry>Build cross-compiler cc2 using cc1 on machine A.</entry>
</row>
<row>
<entry>3</entry><entry>B</entry><entry>C</entry><entry>C</entry>
<entry>build compiler ccC using cc2 on machine B</entry>
<entry>Build compiler ccC using cc2 on machine B.</entry>
</row>
</tbody>
</tgroup>
</informaltable>
<para>Then, all the other programs needed by machine C can be compiled
<para>Then, all the programs needed by machine C can be compiled
using cc2 on the fast machine B. Note that unless B can run programs
produced for C, there is no way to test the built programs until machine
C itself is running. For example, for testing ccC, we may want to add a
produced for C, there is no way to test the newly built programs until machine
C itself is running. For example, to run a test suite on ccC, we may want to add a
fourth stage:</para>
<informaltable align="center">
@ -129,7 +128,7 @@
<tbody>
<row>
<entry>4</entry><entry>C</entry><entry>C</entry><entry>C</entry>
<entry>rebuild and test ccC using itself on machine C</entry>
<entry>Rebuild and test ccC using ccC on machine C.</entry>
</row>
</tbody>
</tgroup>
@ -146,44 +145,62 @@
<title>Implementation of Cross-Compilation for LFS</title>
<note>
<para>Almost all the build systems use names of the form
cpu-vendor-kernel-os referred to as the machine triplet. An astute
reader may wonder why a <quote>triplet</quote> refers to a four component
name. The reason is history: initially, three component names were enough
to designate a machine unambiguously, but with new machines and systems
appearing, that proved insufficient. The word <quote>triplet</quote>
remained. A simple way to determine your machine triplet is to run
the <command>config.guess</command>
<para>All packages involved with cross compilation in the book use an
autoconf-based build system, which accepts system types in the form
cpu-vendor-kernel-os, referred to as the system triplet. Since the
vendor field is mostly irrelevant, autoconf allows it to be omitted.
An astute reader may wonder why a <quote>triplet</quote> refers to a
four-component name. The reason is that the kernel field and the os
field originated as a single <quote>system</quote> field. That
three-field form is still valid
today for some systems, for example
<literal>x86_64-unknown-freebsd</literal>. But in other cases,
two systems can share the same kernel and still be too different to
use the same triplet for both. For example, Android running on a
mobile phone is completely different from Ubuntu running on an ARM64
server, even though they both run on the same type of CPU (ARM64) and
use the same kernel (Linux).
Without an emulation layer, you cannot run an
executable for the server on the mobile phone or vice versa. So the
<quote>system</quote> field is separated into kernel and os fields to
designate these systems unambiguously. For our example, the Android
system is designated <literal>aarch64-unknown-linux-android</literal>,
and the Ubuntu system is designated
<literal>aarch64-unknown-linux-gnu</literal>. The word
<quote>triplet</quote> remained. A simple way to determine your
system triplet is to run the <command>config.guess</command>
script that comes with the source for many packages. Unpack the binutils
sources and run the script: <userinput>./config.guess</userinput> and note
the output. For example, for a 32-bit Intel processor the
output will be <emphasis>i686-pc-linux-gnu</emphasis>. On a 64-bit
system it will be <emphasis>x86_64-pc-linux-gnu</emphasis>.</para>
system it will be <emphasis>x86_64-pc-linux-gnu</emphasis>. On most
Linux systems the even simpler <command>gcc -dumpmachine</command> command
will give you similar information.</para>
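<para>As a purely illustrative aside (not part of the build, and assuming
a <command>bash</command> shell), the four components of such a name can
be pulled apart with nothing more than the shell itself:</para>

<screen><userinput>triplet=x86_64-pc-linux-gnu
IFS=- read -r cpu vendor kernel os &lt;&lt;&lt; "$triplet"
echo "cpu=$cpu vendor=$vendor kernel=$kernel os=$os"</userinput></screen>

<para>which prints:</para>

<screen><computeroutput>cpu=x86_64 vendor=pc kernel=linux os=gnu</computeroutput></screen>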
<para>Also be aware of the name of the platform's dynamic linker, often
<para>You should also be aware of the name of the platform's dynamic linker, often
referred to as the dynamic loader (not to be confused with the standard
linker <command>ld</command> that is part of binutils). The dynamic linker
provided by Glibc finds and loads the shared libraries needed by a
provided by the glibc package finds and loads the shared libraries needed by a
program, prepares the program to run, and then runs it. The name of the
dynamic linker for a 32-bit Intel machine is <filename
class="libraryfile">ld-linux.so.2</filename> and is <filename
class="libraryfile">ld-linux-x86-64.so.2</filename> for 64-bit systems. A
class="libraryfile">ld-linux.so.2</filename>; it's <filename
class="libraryfile">ld-linux-x86-64.so.2</filename> on 64-bit systems. A
sure-fire way to determine the name of the dynamic linker is to inspect a
random binary from the host system by running: <userinput>readelf -l
&lt;name of binary&gt; | grep interpreter</userinput> and noting the
output. The authoritative reference covering all platforms is in the
<filename>shlib-versions</filename> file in the root of the Glibc source
<filename>shlib-versions</filename> file in the root of the glibc source
tree.</para>
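<para>As a hedged illustration, on a typical 64-bit x86 host the check
might look like the following; the binary inspected and the exact path
reported will vary from system to system:</para>

<screen><userinput>readelf -l /bin/ls | grep interpreter</userinput></screen>

<para>producing something like:</para>

<screen><computeroutput>      [Requesting program interpreter: /lib64/ld-linux-x86-64.so.2]</computeroutput></screen>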
</note>
<para>In order to fake a cross compilation in LFS, the name of the host triplet
is slightly adjusted by changing the &quot;vendor&quot; field in the
<envar>LFS_TGT</envar> variable. We also use the
<envar>LFS_TGT</envar> variable so it says &quot;lfs&quot;. We also use the
<parameter>--with-sysroot</parameter> option when building the cross linker and
cross compiler to tell them where to find the needed host files. This
ensures that none of the other programs built in <xref
linkend="chapter-temporary-tools"/> can link to libraries on the build
machine. Only two stages are mandatory, and one more for tests:</para>
machine. Only two stages are mandatory, plus one more for tests.</para>
<informaltable align="center">
<tgroup cols="5">
@ -199,47 +216,63 @@
<tbody>
<row>
<entry>1</entry><entry>pc</entry><entry>pc</entry><entry>lfs</entry>
<entry>build cross-compiler cc1 using cc-pc on pc</entry>
<entry>Build cross-compiler cc1 using cc-pc on pc.</entry>
</row>
<row>
<entry>2</entry><entry>pc</entry><entry>lfs</entry><entry>lfs</entry>
<entry>build compiler cc-lfs using cc1 on pc</entry>
<entry>Build compiler cc-lfs using cc1 on pc.</entry>
</row>
<row>
<entry>3</entry><entry>lfs</entry><entry>lfs</entry><entry>lfs</entry>
<entry>rebuild and test cc-lfs using itself on lfs</entry>
<entry>Rebuild and test cc-lfs using cc-lfs on lfs.</entry>
</row>
</tbody>
</tgroup>
</informaltable>
<para>In the above table, <quote>on pc</quote> means the commands are run
<para>In the preceding table, <quote>on pc</quote> means the commands are run
on a machine using the already installed distribution. <quote>On
lfs</quote> means the commands are run in a chrooted environment.</para>
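<para>To make the <envar>LFS_TGT</envar> adjustment described above more
concrete, the target triplet could be derived from the host triplet along
the following lines; this is an illustrative sketch only, and the value
actually used is the one set in the book's preparation chapter:</para>

<screen><userinput>export LFS_TGT=$(uname -m)-lfs-linux-gnu
echo $LFS_TGT</userinput></screen>

<para>On a 64-bit x86 host this would report
<literal>x86_64-lfs-linux-gnu</literal>: the same triplet as the host,
except that the vendor field now says <quote>lfs</quote>.</para>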
<para>Now, there is more about cross-compiling: the C language is not
just a compiler, but also defines a standard library. In this book, the
GNU C library, named glibc, is used. This library must
be compiled for the lfs machine, that is, using the cross compiler cc1.
GNU C library, named glibc, is used (there is an alternative, &quot;musl&quot;). This library must
be compiled for the LFS machine; that is, using the cross compiler cc1.
But the compiler itself uses an internal library implementing complex
instructions not available in the assembler instruction set. This
internal library is named libgcc, and must be linked to the glibc
subroutines for functions not available in the assembler instruction set. This
internal library is named libgcc, and it must be linked to the glibc
library to be fully functional! Furthermore, the standard library for
C++ (libstdc++) also needs being linked to glibc. The solution to this
chicken and egg problem is to first build a degraded cc1 based libgcc,
lacking some functionalities such as threads and exception handling, then
build glibc using this degraded compiler (glibc itself is not
degraded), then build libstdc++. But this last library will lack the
same functionalities as libgcc.</para>
C++ (libstdc++) must also be linked with glibc. The solution to this
chicken and egg problem is first to build a degraded cc1-based libgcc,
lacking some functionalities such as threads and exception handling, and then
to build glibc using this degraded compiler (glibc itself is not
degraded), and also to build libstdc++. This last library will lack some of the
functionality of libgcc.</para>
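<para>For orientation only, the <quote>degraded</quote> first pass of the
compiler is obtained by handing gcc's <command>configure</command> script
switches of the following kind. This is a simplified sketch, not the
exact option list used later in the book:</para>

<screen><userinput>../configure --target=$LFS_TGT --with-newlib --without-headers \
             --disable-shared --disable-threads --disable-libstdcxx</userinput></screen>

<para>Here <parameter>--with-newlib</parameter> and
<parameter>--without-headers</parameter> allow libgcc to be built before
any C library exists for the target, while the
<parameter>--disable-shared</parameter> and
<parameter>--disable-threads</parameter> switches drop the shared-library
and thread support that cannot be provided yet.</para>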
<para>This is not the end of the story: the conclusion of the preceding
<para>This is not the end of the story: the upshot of the preceding
paragraph is that cc1 is unable to build a fully functional libstdc++, but
this is the only compiler available for building the C/C++ libraries
during stage 2! Of course, the compiler built during stage 2, cc-lfs,
would be able to build those libraries, but (1) the build system of
GCC does not know that it is usable on pc, and (2) using it on pc
would be at risk of linking to the pc libraries, since cc-lfs is a native
compiler. So we have to build libstdc++ later, in chroot.</para>
gcc does not know that it is usable on pc, and (2) using it on pc
would create a risk of linking to the pc libraries, since cc-lfs is a native
compiler. So we have to rebuild libstdc++ later, as part of
gcc stage 2.</para>
<para>In &ch-final; (or <quote>stage 3</quote>), all packages needed for
the LFS system are built. Even if a package is already installed into
the LFS system in a previous chapter, we still rebuild the package
unless we are completely sure it's unnecessary. The main reason for
rebuilding these packages is to settle them down: if we reinstall an LFS
package on the completed LFS system, the installed content of the package
should be the same as the content of the same package installed in
&ch-final;. The temporary packages installed in &ch-tmp-cross; or
&ch-tmp-chroot; cannot satisfy this expectation, because some of them
are built without optional dependencies installed, and autoconf cannot
perform some feature checks in &ch-tmp-cross; because of cross
compilation, causing the temporary packages to lack optional features
or use suboptimal code routines. Additionally, a minor reason for
rebuilding the packages is to allow the test suites to be run.</para>
</sect2>
@ -252,10 +285,10 @@
be part of the final system.</para>
<para>Binutils is installed first because the <command>configure</command>
runs of both GCC and Glibc perform various feature tests on the assembler
runs of both gcc and glibc perform various feature tests on the assembler
and linker to determine which software features to enable or disable. This
is more important than one might first realize. An incorrectly configured
GCC or Glibc can result in a subtly broken toolchain, where the impact of
is more important than one might realize at first. An incorrectly configured
gcc or glibc can result in a subtly broken toolchain, where the impact of
such breakage might not show up until near the end of the build of an
entire distribution. A test suite failure will usually highlight this error
before too much additional work is performed.</para>
@ -274,14 +307,14 @@
<command>$LFS_TGT-gcc dummy.c -Wl,--verbose 2&gt;&amp;1 | grep succeeded</command>
will show all the files successfully opened during the linking.</para>
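<para>A hedged example of what that check might print follows; the paths
and version number are hypothetical and depend on where the cross
toolchain was installed:</para>

<screen><computeroutput>attempt to open /mnt/lfs/usr/lib/crt1.o succeeded
attempt to open /mnt/lfs/tools/lib/gcc/x86_64-lfs-linux-gnu/12.2.0/crtbegin.o succeeded
attempt to open /mnt/lfs/usr/lib/libc.so succeeded</computeroutput></screen>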
<para>The next package installed is GCC. An example of what can be
<para>The next package installed is gcc. An example of what can be
seen during its run of <command>configure</command> is:</para>
<screen><computeroutput>checking what assembler to use... /mnt/lfs/tools/i686-lfs-linux-gnu/bin/as
checking what linker to use... /mnt/lfs/tools/i686-lfs-linux-gnu/bin/ld</computeroutput></screen>
<para>This is important for the reasons mentioned above. It also
demonstrates that GCC's configure script does not search the PATH
demonstrates that gcc's configure script does not search the PATH
directories to find which tools to use. However, during the actual
operation of <command>gcc</command> itself, the same search paths are not
necessarily used. To find out which standard linker <command>gcc</command>
@ -295,12 +328,12 @@ checking what linker to use... /mnt/lfs/tools/i686-lfs-linux-gnu/bin/ld</compute
order.</para>
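<para>For instance, asking the cross compiler which <command>ld</command>
it will invoke might look like this (illustrative output only; the path
depends on the target triplet and on where the toolchain was
installed):</para>

<screen><userinput>$LFS_TGT-gcc -print-prog-name=ld</userinput></screen>

<screen><computeroutput>/mnt/lfs/tools/x86_64-lfs-linux-gnu/bin/ld</computeroutput></screen>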
<para>Next installed are sanitized Linux API headers. These allow the
standard C library (Glibc) to interface with features that the Linux
standard C library (glibc) to interface with features that the Linux
kernel will provide.</para>
<para>The next package installed is Glibc. The most important
considerations for building Glibc are the compiler, binary tools, and
kernel headers. The compiler is generally not an issue since Glibc will
<para>The next package installed is glibc. The most important
considerations for building glibc are the compiler, binary tools, and
kernel headers. The compiler is generally not an issue since glibc will
always use the compiler relating to the <parameter>--host</parameter>
parameter passed to its configure script; e.g. in our case, the compiler
will be <command>$LFS_TGT-gcc</command>. The binary tools and kernel
@ -313,30 +346,31 @@ checking what linker to use... /mnt/lfs/tools/i686-lfs-linux-gnu/bin/ld</compute
<envar>$LFS_TGT</envar> expanded) to control which binary tools are used
and the use of the <parameter>-nostdinc</parameter> and
<parameter>-isystem</parameter> flags to control the compiler's include
search path. These items highlight an important aspect of the Glibc
search path. These items highlight an important aspect of the glibc
package&mdash;it is very self-sufficient in terms of its build machinery
and generally does not rely on toolchain defaults.</para>
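<para>A simplified sketch of how such a cross build of glibc might be
configured is shown below; the option values are illustrative, and the
book's glibc page gives the authoritative command:</para>

<screen><userinput>../configure --prefix=/usr                      \
             --host=$LFS_TGT                    \
             --build=$(../scripts/config.guess) \
             --with-headers=$LFS/usr/include</userinput></screen>

<para>With <parameter>--host</parameter> set to the target triplet,
glibc's build machinery picks <command>$LFS_TGT-gcc</command> and the
matching binutils on its own, and <parameter>--with-headers</parameter>
points it at the sanitized kernel headers installed in the previous
step.</para>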
<para>As said above, the standard C++ library is compiled next, followed in
<xref linkend="chapter-temporary-tools"/> by all the programs that need
themselves to be built. The install step of all those packages uses the
<envar>DESTDIR</envar> variable to have the
programs land into the LFS filesystem.</para>
<para>As mentioned above, the standard C++ library is compiled next, followed in
<xref linkend="chapter-temporary-tools"/> by other programs that need
to be cross compiled to break circular dependencies at build time.
The install step of all those packages uses the
<envar>DESTDIR</envar> variable to force installation
in the LFS filesystem.</para>
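<para>Staged installation via <envar>DESTDIR</envar> is the usual
make convention; as a generic illustration (assuming <envar>LFS</envar>
points at the mounted LFS partition, as set up earlier in the book), the
install step of a temporary package has the form:</para>

<screen><userinput>make DESTDIR=$LFS install</userinput></screen>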
<para>At the end of <xref linkend="chapter-temporary-tools"/> the native
lfs compiler is installed. First binutils-pass2 is built,
with the same <envar>DESTDIR</envar> install as the other programs,
then the second pass of GCC is constructed, omitting libstdc++
and other non-important libraries. Due to some weird logic in GCC's
LFS compiler is installed. First binutils-pass2 is built,
in the same <envar>DESTDIR</envar> directory as the other programs,
then the second pass of gcc is constructed, omitting some
non-critical libraries. Due to some weird logic in gcc's
configure script, <envar>CC_FOR_TARGET</envar> ends up as
<command>cc</command> when the host is the same as the target, but is
<command>cc</command> when the host is the same as the target, but
different from the build system. This is why
<parameter>CC_FOR_TARGET=$LFS_TGT-gcc</parameter> is put explicitly into
the configure options.</para>
<parameter>CC_FOR_TARGET=$LFS_TGT-gcc</parameter> is declared explicitly
as one of the configuration options.</para>
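<para>In other words, the second-pass gcc configuration contains an
explicit override along these lines; this fragment is for illustration
only and omits the many other options given on the corresponding gcc
page later in the book:</para>

<screen><userinput>../configure --build=$(../config.guess) \
             --host=$LFS_TGT             \
             --target=$LFS_TGT           \
             CC_FOR_TARGET=$LFS_TGT-gcc</userinput></screen>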
<para>Upon entering the chroot environment in <xref
linkend="chapter-chroot-temporary-tools"/>, the first task is to install
libstdc++. Then temporary installations of programs needed for the proper
linkend="chapter-chroot-temporary-tools"/>,
the temporary installations of programs needed for the proper
operation of the toolchain are performed. From this point onwards, the
core toolchain is self-contained and self-hosted. In
<xref linkend="chapter-building-system"/>, final versions of all the