TL;DR I needed to run Buildroot on my M1 MacBook. Buildroot’s instructions link to a Vagrantfile, but I could not get it working out of the box (!), so I ended up creating a new Vagrant base box with VMWare Fusion based on Ubuntu Server 22.04.3 LTS for arm64. It worked out pretty well.
################################################################################
#
# Vagrantfile
#
################################################################################

# Buildroot version to use
RELEASE='2023.11.1'

### Change here for more memory/cores ###
VM_MEMORY=2048
VM_CORES=1

Vagrant.configure('2') do |config|
  config.vm.box = 'ubuntu/bionic64'

  config.vm.provider :vmware_fusion do |v, override|
    v.vmx['memsize'] = VM_MEMORY
    v.vmx['numvcpus'] = VM_CORES
  end

  config.vm.provider :virtualbox do |v, override|
    v.memory = VM_MEMORY
    v.cpus = VM_CORES

    required_plugins = %w( vagrant-vbguest )
    required_plugins.each do |plugin|
      system "vagrant plugin install #{plugin}" unless Vagrant.has_plugin? plugin
    end
  end

  config.vm.provision 'shell' do |s|
    s.inline = 'echo Setting up machine name'

    config.vm.provider :vmware_fusion do |v, override|
      v.vmx['displayname'] = "Buildroot #{RELEASE}"
    end

    config.vm.provider :virtualbox do |v, override|
      v.name = "Buildroot #{RELEASE}"
    end
  end

  config.vm.provision 'shell', privileged: true, inline:
    "sed -i 's|deb http://us.archive.ubuntu.com/ubuntu/|deb mirror://mirrors.ubuntu.com/mirrors.txt|g' /etc/apt/sources.list
     dpkg --add-architecture i386
     apt-get -q update
     apt-get purge -q -y snapd lxcfs lxd ubuntu-core-launcher snap-confine
     apt-get -q -y install build-essential libncurses5-dev \
       git bzr cvs mercurial subversion libc6:i386 unzip bc
     apt-get -q -y autoremove
     apt-get -q -y clean
     update-locale LC_ALL=C"

  config.vm.provision 'shell', privileged: false, inline:
    "echo 'Downloading and extracting buildroot #{RELEASE}'
     wget -q -c http://buildroot.org/downloads/buildroot-#{RELEASE}.tar.gz
     tar axf buildroot-#{RELEASE}.tar.gz"
end
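As a small aside, once the Vagrantfile is saved somewhere, Vagrant itself can sanity-check the syntax before you bring anything up (this is a generic Vagrant command, not something the Buildroot instructions mention):

$ vagrant validate    # parses the Vagrantfile and reports configuration errors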
After you have stored the Vagrantfile in some directory, the instructions tell you to run
$ vagrant up
which automatically started installing a plugin called vagrant-vbguest, since Vagrant defaults to VirtualBox. I wanted to use Fusion, so I tried
$ vagrant up --provider=vmware_fusion
and it told me that
The provider 'vmware_fusion' could not be found, but was requested to
back the machine 'default'. Please use a provider that exists.
Vagrant knows about the following providers: docker, hyperv, virtualbox
I was still missing something. I was told to run
$ vagrant plugin install vagrant-vmware-desktop
which worked nicely:
Installing the 'vagrant-vmware-desktop' plugin. This can take a few minutes...
Fetching vagrant-vmware-desktop-3.0.3.gem
Installed the plugin 'vagrant-vmware-desktop (3.0.3)'!
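To see what ended up installed, there is also the standard listing command:

$ vagrant plugin list    # should now include vagrant-vmware-desktop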
Now we have all these things, so let’s get back to running vagrant up[1]:
$ vagrant up
Bringing machine 'default' up with 'vmware_fusion' provider...
==> default: Box 'ubuntu/bionic64' could not be found. Attempting to find and install...
default: Box Provider: vmware_desktop, vmware_fusion, vmware_workstation
default: Box Version: >= 0
==> default: Loading metadata for box 'ubuntu/bionic64'
default: URL: https://vagrantcloud.com/api/v2/vagrant/ubuntu/bionic64
The box you're attempting to add doesn't support the provider
you requested. Please find an alternate box or use an alternate
provider. Double-check your requested provider to verify you didn't
simply misspell it.
If you're adding a box from HashiCorp's Vagrant Cloud, make sure the box is
released.
Name: ubuntu/bionic64
Address: https://vagrantcloud.com/api/v2/vagrant/ubuntu/bionic64
Requested provider: vmware_desktop vmware_fusion vmware_workstation (arm64)
Alright, so “The box you’re attempting to add doesn’t support the provider you requested.” Naturally I played around for a while before I actually investigated the issue, but it turns out that we can ask the Vagrant Cloud API for details about the box:
$ curl -s \
https://app.vagrantup.com/api/v2/box/ubuntu/bionic64 \
| jq -r '.versions[].providers[].name' \
| sort -u
virtualbox
The same applies to the boxes for Ubuntu 22.04 LTS (Jammy Jellyfish), the latest LTS, found under ubuntu/jammy64. All values for architecture are “unknown”, but I’m guessing that effectively means amd64. Oh well.
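For the record, the same kind of query can list the architecture field next to each provider name. This is a sketch that assumes the v2 API reports architecture per provider, which the “unknown” values above suggest:

$ curl -s \
    https://app.vagrantup.com/api/v2/box/ubuntu/jammy64 \
  | jq -r '.versions[].providers[] | "\(.name) \(.architecture)"' \
  | sort -u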
At the time of writing, Ubuntu provides 64-bit ARM release builds for the Server version of Jammy Jellyfish. Also, Vagrant Cloud offers many user-uploaded boxes matching our interests, but still…
How hard can it be to build the box ourselves?
I had no idea how to build a Vagrant box. I searched a bit and was inspired by a temptingly short guide by the user wildfluss at GitHub. Looks pretty easy! Hashicorp has more detailed instructions for building a base box.
I needed the Ubuntu Server install image, ubuntu-22.04.3-live-server-arm64.iso. I then created a new VM with the non-default drive size of 30 GB – you may want more – and unchecked “Split into multiple files” from “Advanced options”[2]. The defaults were 4 GB of RAM, 2 cores, and NAT networking, and I did not alter them. I removed all the unnecessary devices[3], and booted up the ISO.
In this case, the installation process is mostly about accepting whatever default choices the installer provides. I picked “Ubuntu Server” as the type of install, as the “minimized” version lacks a bunch of packages which Buildroot expects[4]. And of course I had to redo the manual installation so many times that I almost decided to start automating it too, but I managed to contain myself.
Vagrant seems to prefer logging in as the user “vagrant”, so I picked that as my username. Vagrant also prefers to do the first login over SSH using a well-known private key, which it replaces with a more secure one during initial setup. I also picked a memorable password since I still had to log in and customize the instance – or maybe do some troubleshooting.
I picked “Install OpenSSH server” so sshd will be installed and automatically enabled. The Ubuntu installer can nowadays provision the machine with someone’s GitHub or Launchpad keys, but I wanted to provide Vagrant’s default keys, so I needed to add them later.
If this were a VM that might end up as a network-exposed host, it would most likely be correct to disable password authentication for SSH, especially since I picked simplistic passwords for this purpose. If you choose to enable the OpenSSH server during the install and do not pick a public key to install, then password authentication will be force-enabled[5].
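A minimal sketch of doing that on the finished VM, assuming the stock Ubuntu layout where /etc/ssh/sshd_config includes /etc/ssh/sshd_config.d/*.conf and sshd uses the first value it sees for each keyword (the drop-in name below is my own invention):

$ printf 'PasswordAuthentication no\n' | \
    sudo tee /etc/ssh/sshd_config.d/01-no-passwords.conf
$ sudo sshd -t && sudo systemctl reload ssh
# the file has to sort before the installer's 50-cloud-init.conf (see footnote [5]),
# otherwise that file's "PasswordAuthentication yes" wins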
After a few more default selections we are done, and we can reboot into a fresh install. Once the machine boots up, the OpenSSH daemon is enabled and started, so you can log in via the console or SSH. My VM is running behind NAT and I’m not going to bother setting up a local port-forward[6], so I just used the console for the customizations below. As the “vagrant” user, I set the “known insecure” Vagrant keys as permitted for SSH login, enabled passwordless sudo, and finally powered down the machine:
$ wget \
-O ~/.ssh/authorized_keys \
https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant.pub
$ sudo sh -c 'echo "vagrant ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/vagrant'
$ sudo -K; sudo id # verify that Vagrant can now sudo without password
$ systemctl poweroff
Since this is a VMWare VM, open-vm-tools may be useful, but it looks like the Ubuntu installer installs it automatically. At this point I also removed the DVD drive because I was uncertain how it would behave with Vagrant. Now that we have our VM instance configured as a potential base image, we can try turning it into a box file.
A Vagrant .box file is a tarball (tar, tar.gz, zip) that contains all the information for a provider to launch a Vagrant machine.
The separate chapter for VMWare-based boxes contains more details, such as the part about optimizing box sizes. In my Fusion install, the command seems to be called vmware-vdiskmanager instead of vmware-diskmanager as in the docs. You will probably find it under /Applications/VMware Fusion.app/Contents/Library/ among the other CLI commands. Your VM contents will by default be in

~/Virtual Machines.localized/<vm-name>.vmwarevm
The contents of this directory will go into the box file. The VM directory may also contain logs, and additionally lockfiles if the VM was not shut down cleanly. I don’t know whether they make a difference to Vagrant, but I tried to avoid including things like that.
As always, be careful, especially if you decide to share or upload your boxes: the VM’s disk will contain home directories with shell histories, configuration, SSH keys and so on, so avoid shipping private keys or other secrets. Also, I did not investigate what exactly Fusion includes in the VM directory, so I suggest looking into it before sharing.
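Before packaging, it does not hurt to simply look at what is actually in that directory (<vm-name> is whatever you called the VM, paths as above):

$ cd "$HOME/Virtual Machines.localized/<vm-name>.vmwarevm"
$ ls -la
# with the VM fully powered off, leftover vmware*.log files and *.lck
# directories are runtime artifacts and can be left out of the box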
With the fresh install, I started with a 6.0 GB Virtual Disk.vmdk and ended up with 5.9 GB after running the following:
$ vmware-vdiskmanager -d 'Virtual Disk.vmdk' # defragment
$ vmware-vdiskmanager -k 'Virtual Disk.vmdk' # shrink
Now, the documentation says that the only mandatory thing in addition to the VM contents is metadata.json, which
Contains a map with information about the box. Most importantly the target provider.
I noticed that Vagrant has several vmware_* providers, but it seems that nowadays just vmware_desktop is preferred. So I created

$HOME/Virtual Machines.localized/ubuntu-server-22.04.3.vmwarevm/metadata.json

with the following contents:
{
  "provider": "vmware_desktop",
  "architecture": "arm64"
}
Here provider is the only mandatory value. The instructions said that the box is a tarball, so let us tar and gzip it:
$ tar \
-zcvf ~/ubuntu-server-22.04.3.tgz \
-C "$HOME/Virtual Machines.localized/ubuntu-server-22.04.3.vmwarevm" \
.
which in my case resulted in a 2.3 GB file. Now we can try importing it with Vagrant:
$ vagrant box add --name ubuntu-server-22.03.4 ~/ubuntu-server-22.04.3.tgz
[..]
==> box: Successfully added box 'ubuntu-server-22.03.4' (v0) for ''!
$ vagrant box list
ubuntu-server-22.03.4 (vmware_desktop, 0)
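If you want to double-check what actually went into the box, listing the tarball is enough:

$ tar -tzf ~/ubuntu-server-22.04.3.tgz
# metadata.json, the .vmx file and the .vmdk files should all sit at the
# top level of the archive, not inside a subdirectory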
We’re done with the box now, so let’s try changing the box option to the one we just created:
--- Vagrantfile.orig
+++ Vagrantfile
@@ -12,7 +12,7 @@
 VM_CORES=1

 Vagrant.configure('2') do |config|
-  config.vm.box = 'ubuntu/bionic64'
+  config.vm.box = 'ubuntu-server-22.03.4'

   config.vm.provider :vmware_fusion do |v, override|
     v.vmx['memsize'] = VM_MEMORY
and see what happens[7]:
$ vagrant up
[..]
default: E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/jammy/main/binary-i386/Packages 404 Not Found [IP: 185.125.190.36 80]
default: E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/jammy-updates/main/binary-i386/Packages 404 Not Found [IP: 185.125.190.36 80]
default: E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/jammy-backports/main/binary-i386/Packages 404 Not Found [IP: 185.125.190.36 80]
default: E: Failed to fetch http://ports.ubuntu.com/ubuntu-ports/dists/jammy-security/main/binary-i386/Packages 404 Not Found [IP: 185.125.190.36 80]
default: E: Some index files failed to download. They have been ignored, or old ones used instead.
[..]
Notice the i386 there. Ubuntu has been going back and forth with dropping architecture support for i386. I took the easy way out and removed anything referring to i386. While we’re here, let’s also ask for two cores and double the memory[8].
--- Vagrantfile.orig
+++ Vagrantfile
@@ -8,11 +8,11 @@
 RELEASE='2023.11.1'

 ### Change here for more memory/cores ###
-VM_MEMORY=2048
-VM_CORES=1
+VM_MEMORY=4096
+VM_CORES=2

 Vagrant.configure('2') do |config|
-  config.vm.box = 'ubuntu/bionic64'
+  config.vm.box = 'ubuntu-server-22.03.4'

   config.vm.provider :vmware_fusion do |v, override|
     v.vmx['memsize'] = VM_MEMORY
@@ -43,11 +43,10 @@

   config.vm.provision 'shell', privileged: true, inline:
     "sed -i 's|deb http://us.archive.ubuntu.com/ubuntu/|deb mirror://mirrors.ubuntu.com/mirrors.txt|g' /etc/apt/sources.list
-     dpkg --add-architecture i386
      apt-get -q update
      apt-get purge -q -y snapd lxcfs lxd ubuntu-core-launcher snap-confine
      apt-get -q -y install build-essential libncurses5-dev \
-       git bzr cvs mercurial subversion libc6:i386 unzip bc
+       git bzr cvs mercurial subversion unzip bc
      apt-get -q -y autoremove
      apt-get -q -y clean
      update-locale LC_ALL=C"
Now, let’s destroy and begin anew:
$ vagrant destroy -f
==> default: Stopping the VMware VM...
==> default: Deleting the VM...
$ vagrant up
Bringing machine 'default' up with 'vmware_fusion' provider...
==> default: Cloning VMware VM: 'ubuntu-server-22.03.4'. This can take some time...
==> default: Verifying vmnet devices are healthy...
==> default: Preparing network adapters...
WARNING: The VMX file for this box contains a setting that is automatically overwritten by Vagrant
WARNING: when started. Vagrant will stop overwriting this setting in an upcoming release which may
WARNING: prevent proper networking setup. Below is the detected VMX setting:
WARNING:
WARNING: ethernet0.pcislotnumber = "160"
WARNING:
WARNING: If networking fails to properly configure, it may require this VMX setting. It can be manually
WARNING: applied via the Vagrantfile:
WARNING:
WARNING: Vagrant.configure(2) do |config|
WARNING: config.vm.provider :vmware_desktop do |vmware|
WARNING: vmware.vmx["ethernet0.pcislotnumber"] = "160"
WARNING: end
WARNING: end
WARNING:
WARNING: For more information: https://www.vagrantup.com/docs/vmware/boxes.html#vmx-allowlisting
==> default: Starting the VMware VM...
==> default: Waiting for the VM to receive an address...
My vagrant up runs tended to get stuck here. At first I thought it was about the huge WARNING message above, but following those instructions didn’t help. I found some recent discussion about similar issues which seems to hint at something going wrong with the virtual interface initialization. In my case, I was able to work around it by having the VMWare Fusion app always running when I wanted to do vagrant up. I did not troubleshoot this further.
At seemingly random times, I got other failures such as this one:
$ vagrant up
[..]
default: Vagrant insecure key detected. Vagrant will automatically replace
default: this with a newly generated keypair for better security.
default:
default: Inserting generated public key within guest...
default: Removing insecure key from the guest if it's present...
default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
==> default: Configuring network adapters within the VM...
The SSH connection was unexpectedly closed by the remote end. This
usually indicates that SSH within the guest machine was unable to
properly start up. Please boot the VM in GUI mode to check whether
it is booting properly.
which I worked around by running vagrant destroy -f followed by vagrant up a few times…
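If you get tired of typing that, the same workaround can be written as a crude retry loop (my own sketch, nothing Vagrant-specific):

$ until vagrant up; do vagrant destroy -f; done
# keep destroying the half-built VM and retrying until vagrant up succeeds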
Anyhow, eventually things worked out:
$ vagrant up
[..]
==> default: Running provisioner: shell...
default: Running: inline script
default: Downloading and extracting buildroot 2023.11.1
$ vagrant ssh
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-92-generic aarch64)
[..]
vagrant@ubuntu:~$ id
uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),110(lxd)
vagrant@ubuntu:~$ ls
buildroot-2023.11.1 buildroot-2023.11.1.tar.gz
Looks good!
I picked x86_64 as my target architecture with make menuconfig and left everything else on defaults.
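For reproducibility, the non-interactive equivalent should be roughly the following, assuming BR2_x86_64 is the kconfig symbol that menuconfig toggles for the x86_64 target:

$ cd ~/buildroot-2023.11.1
$ echo 'BR2_x86_64=y' > .config    # select the target architecture
$ make olddefconfig                # fill in every other option with its default

And finally: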
$ time make
[..]
rootdir=/home/vagrant/buildroot-2023.11.1/output/build/buildroot-fs/tar/target
table='/home/vagrant/buildroot-2023.11.1/output/build/buildroot-fs/full_devices_table.txt'
ln -snf /home/vagrant/buildroot-2023.11.1/output/host/x86_64-buildroot-linux-gnu/sysroot /home/vagrant/buildroot-2023.11.1/output/staging
real 21m1.929s
user 32m16.937s
sys 4m27.853s
$ du -h output/images/rootfs.tar
5.1M output/images/rootfs.tar
At this point, the 14 GB root partition had something like 600 MB of free space left so I suggest reserving a bit more. In any case, this exercise is now successfully done.
[1] After installing the VMWare provider, vagrant seemed to use it as the default without explicitly specifying it.
[2] 20 GB with default partitioning gave me 10 GB of root, which ran out during some Buildroot make runs. 30 GB provides around 14 GB, which was just enough to do a default make for x86_64. This is clearly a very inefficient way to get more space for the home directory, so of course one may also customize the partitioning scheme, but I wanted to keep this as simple as possible. It might be wiser to make sure you have some extra space instead of having to go back and rebuild the same box with a larger disk. So why not multiple drive files? I saw a suggestion to run some size optimization commands for the VM drive; I was unsure how that would behave with multiple files, so I decided to play it safe.
[3] The Camera, Sound Card and shared Bluetooth devices are not needed for Buildroot. Keyboard input in the Ubuntu installer relies on the USB Controller, so you’ll likely want to keep that one.
[4] Yes, I tried this.
[5] The file is at /etc/ssh/sshd_config.d/50-cloud-init.conf. I did not bother verifying, but I assume that if you leave the OpenSSH daemon disabled during the install and later do something like systemctl enable --now ssh, then you would have to enable password login manually.
[6] I got some weird errors when trying to set up port forwarding with vmnet-cfgcli, which was a side quest I did not want. Vagrant’s official VMWare provider seems to manipulate the VM’s configuration file directly. Another alternative is to set up bridged networking, which might also expose the VM directly to other users connecting from outside your computer.
[7] I occasionally saw some error messages about vmnet instances when running vagrant up. A rerun fixed these, so I assumed something was timing out a bit too eagerly.
[8] Buildroot does not parallelize by default, but I have the cores. I saw some OOM kills with 2 GB and x86_64, so bumping up the memory is a good idea.