The MagAO-X software is designed for use on Linux with CentOS 7, and the included provisioning script will automatically set up a fresh install on a computer running that OS. However, most of us don’t use CentOS 7 on our personal computers.
If you’re on a Mac or Windows machine, or if you just want to keep MagAO-X isolated from the rest of your OS, you should use a virtual machine (VM). A virtual machine is a simulated computer (running whatever “guest OS” you like) that runs as a program on your computer’s OS (which we call the “host OS”). This virtual machine can then be used to operate all the normal MagAO-X GUI and CLI tools and control the real instrument.
Conceptually, you just create a virtual CentOS computer and go through the normal installation process on it. To automate this process, and make certain customizations for speed and convenience, there’s Vagrant. Vagrant can start a virtual machine from a pre-made image, run your install script, and configure things like forwarding network ports from the VM to your host OS.
As it happens, MagAO-X has a Vagrantfile specifying the setup process to minimize the number of manual steps.
To run the VM you will need:

- `git` — preinstalled on most Linux distributions; install with `xcode-select --install` on macOS; see below for Windows
- VirtualBox — preferred virtualization backend, available for free
- Vagrant — program to automate creation / provisioning of development VMs
- NFS — preinstalled on macOS and most Linux hosts (ICC guest on macOS and Linux hosts only)
Additional notes for Windows users
Windows use isn’t tested automatically, and things may break unexpectedly…
It’s probably easiest to get `git` from Anaconda if you’re already using it (run `conda install git` at the Anaconda command line).
`git` needs to be configured not to alter line endings. After installing git, run `git config --global core.autocrlf false` before cloning MagAOX. (However, if you use `git` for other things, you may not want this to be a global setting.)
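As a sketch, setting and verifying the line-ending option looks like this (it uses a throwaway config file via `GIT_CONFIG_GLOBAL` so the demo doesn’t touch your real `~/.gitconfig`; for the real setup, drop that variable):

```shell
# Disable line-ending conversion, then confirm the setting stuck.
export GIT_CONFIG_GLOBAL=/tmp/demo_gitconfig
git config --global core.autocrlf false
git config --global core.autocrlf   # prints: false
```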
The existence of a `windows_host.txt` advisory file is required for provisioning to succeed. (Its presence tells the scripts to work around functionality that is missing on Windows hosts.)
The section below on Using GUIs in the VM needs to be expanded with instructions for Windows. (Basically, we need to figure out which of the X11 servers for Windows works with `vagrant ssh` in the current configuration.) Until then, no GUIs on Windows.
Verify the `vagrant` command is available:
```
$ vagrant --help
Usage: vagrant [options] <command> [<args>]
...
```
Clone magao-x/MagAOX (if necessary) and change into the repository folder:

```
$ git clone https://github.com/magao-x/MagAOX.git
Cloning into 'MagAOX'...
...
$ cd MagAOX
```
Windows only: Create a new blank file named `windows_host.txt` in the MagAOX folder.
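From a Git Bash or similar shell in the MagAOX folder, the advisory file can be created with `touch` (in plain `cmd.exe`, `type nul > windows_host.txt` works instead):

```shell
# Create the zero-byte advisory file that tells provisioning
# to apply Windows-host workarounds.
touch windows_host.txt
ls -l windows_host.txt   # should show a 0-byte file
```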
```
$ vagrant up
```
If prompted, enter your password to configure NFS exports. (See this doc for information on eliminating that prompt.)
The `vagrant up` step is CPU and bandwidth intensive the first time, as it will download an OS image and all of the MagAO-X dependencies, then compile them. Subsequent `vagrant up`s will just boot the existing machine.
Don’t be alarmed by the output from `vagrant up`. Provisioning is very noisy, and messages in red aren’t necessarily errors. Successful provisioning will end with a success message.
What to do if you don’t see the success message
Most likely that means an error occurred running the provisioning scripts and they did not finish. That can happen if a big download gets interrupted, for example. It’s always safe to run `vagrant provision` again, and it’ll re-run only the necessary steps, which may be enough to get you to a working VM.
If that doesn’t resolve the issue, you’ll need the complete provisioning output to get help. The following command will save it to a file `provision.log`, which you can then email or Slack to someone who can help:

```
$ vagrant provision | tee provision.log
```
To connect to the VM, use `vagrant ssh`. You’ll be logged in as user `vagrant` with no password, and the command prompt in your shell will change to something like `[vagrant@centos7 ~]$`.
The rest of the commands in this section are to be run in a `vagrant ssh` session, unless otherwise noted.
Remotely controlling MagAO-X
Before you can remotely control MagAO-X, a little post-provisioning configuration is required. You must have a user account on MagAO-X with an SSH key file configured. (This will probably be called something like `~/.ssh/id_ecdsa` on your host computer, with the corresponding file `~/.ssh/id_ecdsa.pub` added to your authorized keys on the MagAO-X computers.)
With the username and key file handy, go to the folder where you cloned the MagAOX repository. There will be a subfolder called `vm` where the provisioning process placed a lot of files. In `vm/ssh`, open the `config` file. At the end you will see

```
Host *
    User YOURUSERNAME
```

which you should update with the username you use on MagAO-X computers.
Notice the line at the top that says `IdentityFile /vagrant/vm/ssh/magaox_ssh_key`. This tells the VM to use the key file at `vm/ssh/magaox_ssh_key` from the host to authenticate you. Copy the key file you identified before, rename it `magaox_ssh_key`, and store it in the same directory as the `config` file.
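A minimal sketch of the copy step, demonstrated in a scratch directory so it runs anywhere (in real use, the source would be your actual private key, e.g. `~/.ssh/id_ecdsa`, and the destination the `vm/ssh` folder of your MagAOX clone):

```shell
# Stage a key file the way the VM's ssh config expects.
mkdir -p demo/vm/ssh
printf 'fake-key-material\n' > demo/id_ecdsa   # stand-in for your real private key
cp demo/id_ecdsa demo/vm/ssh/magaox_ssh_key
chmod 600 demo/vm/ssh/magaox_ssh_key           # ssh refuses keys with loose permissions
ls -l demo/vm/ssh/magaox_ssh_key
```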
To ensure everything’s configured correctly, from a `vagrant ssh` session run `ssh rtc`, then `exit`:

```
[vagrant@centos7] $ ssh rtc
[you@exao2] $ exit
[vagrant@centos7] $
```
The `xctrl` script is installed during provisioning, and a default set of apps is configured to run on `xctrl startup`. These apps launch SSH tunnels to the instrument.

The proclist for VM usage is in `magao-x/config/proclist_vm.txt`.
Running `xctrl startup` to start the tunnels should result in output like this:

```
[vagrant@centos7 ~]$ xctrl startup
Session vm_aoc_milkzmq does not exist
Session vm_aoc_indi does not exist
Created tmux session for vm_aoc_milkzmq
Created tmux session for vm_aoc_indi
Executed in vm_aoc_milkzmq session: '/opt/MagAOX/bin/sshDigger -n vm_aoc_milkzmq'
Executed in vm_aoc_indi session: '/opt/MagAOX/bin/sshDigger -n vm_aoc_indi'
```
And you can check their status with `xctrl status`:

```
[vagrant@centos7 ~]$ xctrl status
vm_aoc_indi: running (pid: 6147)
vm_aoc_milkzmq: running (pid: 6148)
```
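If you want to script around these tunnels, one way to check a status line is simple pattern matching (a sketch only; the sample line is copied from the output above, and `xctrl` itself exists only inside the VM):

```shell
# Inspect one line of `xctrl status` output and report whether
# the tunnel process is running.
line="vm_aoc_indi: running (pid: 6147)"
case "$line" in
  *": running ("*) echo "tunnel up" ;;
  *)               echo "tunnel down" >&2; exit 1 ;;
esac
```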
Using GUIs in the VM
The VM is configured to be “headless”, meaning there’s no graphical display window. However, we can still build and run MagAO-X GUIs as long as your host OS has an X11 server (most Linux systems do by default, but you will need XQuartz on macOS).
If you’re unfamiliar with SSH X forwarding, the short version is that the app runs on the VM but its window pops up like any other window on your own computer (the host). SSH (i.e. `vagrant ssh`) is the transport that moves information about the window back and forth between your display and the GUI app, which is still running inside the VM.

```
+------------------------------------------+
|                  +----------------------+|
| Host OS          | VM                   ||
|                  |                      ||
| [GUI window] <-SSH-> [MagAO-X GUI app]  ||
|                  +----------------------+|
+------------------------------------------+
```
So, to start the `coronAlignGUI`, you could do…

```
host$ vagrant ssh
vm$ coronAlignGUI
```
…and the coronagraph alignment GUI will come up like any other window on your host machine.
Be careful! Anything you do with these GUIs controls the real instrument (which is sort of the point, but it bears reiterating).
Viewing camera outputs
The realtime image viewer `rtimv` is built during provisioning. To get up-to-date imagery from the instrument, we can use jaredmales/milkzmq, a set of programs that relay shared memory image buffers from one computer to another.
The AOC workstation runs a `mzmqServer` process that re-serves the images it replicates from the rest of the instrument, using compression and a limit of 1 FPS. This ensures it doesn’t overwhelm your home internet connection.
(Napkin math: 1024 × 1024 × 16 bits, or one `camsci1` frame, is ~2 MB. 2 MByte / second is 16 Mbit / second, more than compressed HD video streams. And that’s just one camera!)
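That estimate can be checked with shell arithmetic (the 1024 × 1024 16-bit frame size and the 1 FPS limit are taken from the text above):

```shell
# One 16-bit 1024x1024 frame, relayed at 1 frame per second.
bytes_per_frame=$((1024 * 1024 * 2))         # 2 bytes per 16-bit pixel
bits_per_second=$((bytes_per_frame * 8))     # at the 1 FPS limit
echo "$bytes_per_frame bytes/frame"          # 2097152 (~2 MB)
echo "$((bits_per_second / 1000000)) Mbit/s" # 16
```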
The list of images re-served by AOC is kept in `/opt/MagAOX/config/mzmqServerAOC.conf` (view on GitHub).
After confirming the tunnel `vm_aoc_milkzmq` is running (with `xctrl status`), start a `milkzmqClient`. For this example we’ll request the `camwfs` images and darks:

```
milkzmqClient -p 9000 localhost camwfs camwfs_dark &
```
Note the `&` at the end of the command to background the client, so just hit enter again to get a normal prompt back after its startup messages.
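Since the client runs in the background, it can help to remember its PID for stopping it later. A sketch, using a placeholder command in place of `milkzmqClient` (which exists only inside the VM):

```shell
# Launch in the background and remember the PID via $!.
sleep 300 &    # stand-in for: milkzmqClient -p 9000 localhost camwfs camwfs_dark &
client_pid=$!
echo "client running as pid $client_pid"
# ...when you're done viewing images:
kill "$client_pid"
```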
The configuration directory contains `rtimv` config files named for the various cameras (see the `shmim_name` options in those files for hints about which images to replicate for a given camera).
Start the viewer with

```
rtimv -c rtimv_camwfs.conf
```

and it should pop up a viewer window.
For instructions on `rtimv`, consult its user guide.