Structure of the Cluster

This section tells you what each of our servers is and what software it needs to function. Configuration is only discussed in broad strokes, and you won’t find anything on the actual administration of these machines and services; such documentation can (ideally, at least) be found elsewhere in these notes.

Control Layer

Much of the cluster is controlled from three Dell 2950 nodes. All three machines are Ceph monitors, so Ceph is critical software on all of them; they are otherwise (relatively) simple Debian machines. All three also need AFS clients for various bits of configuration, and each handles a couple of other services, detailed below.
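
Since all three machines are Ceph monitors, any of them can answer a quick health check; a minimal sketch, assuming the usual admin keyring is in place:

    # Overall cluster health, including which monitors are in quorum.
    ceph -s
    # Ask for the monitor quorum status directly.
    ceph quorum_status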

Magellan

Magellan serves as the primary gateway from the cluster to the wider internet; it runs our firewall via Shorewall. Magellan also hosts afs-file and afs-scratch, our virtual AFS servers, which run through libvirt. You can generally ssh from Magellan to any physical machine in the cluster; reaching (OpenStack) VMs is not guaranteed.
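
Since the AFS guests run through libvirt, the standard libvirt tooling can inspect them; a quick sketch (run on Magellan, typically as root):

    # List all libvirt guests, running or shut off.
    virsh list --all
    # Show basic state and resource info for one of the AFS guests.
    virsh dominfo afs-file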

localadmin’s .bashrc on Magellan also has some nice aliases for working with ipmitool.
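
The real aliases are whatever that .bashrc defines; purely as a hypothetical illustration, an ipmitool alias tends to look something like this (the host and username here are made up):

    # Hypothetical -- see localadmin's .bashrc on Magellan for the actual aliases.
    alias ipmi-sunfire0='ipmitool -I lanplus -H sunfire0-mgmt -U root'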

Gomes

Gomes’s main job is running OpenStack. It runs Nova, Keystone, Glance, and Cinder; these are all parts of OpenStack (and, thus, Gomes-critical software). Gomes also handles OpenStack’s outbound networking, so it needs Shorewall as well.
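
With admin credentials sourced, the standard client can confirm those services are registered and running; a sketch, assuming python-openstackclient is installed:

    # List the services registered in Keystone's catalog.
    openstack service list
    # Check that Nova's services (scheduler, conductor, compute, ...) are up.
    openstack compute service list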

Crimea

Crimea is our mailserver. It uses [stuff] to do this.

Storage Layer

Our storage machines need Ceph for replication. They also need AFS clients for common configuration. Further, their underlying filesystem is ZFS, so they need those packages as well. Currently, our storage layer consists of four Sun Fires named (appropriately enough) sunfire0, sunfire1, sunfire2, and sunfire3.
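
Day-to-day health on a storage node comes down to the pools under the OSDs and Ceph’s view of those OSDs; a quick sketch (assumes the admin keyring is present):

    # Check the local ZFS pool(s) backing the OSDs.
    zpool status
    # Confirm every OSD is up and in from Ceph's point of view.
    ceph osd tree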

Compute Nodes

Our three compute nodes are Antonio, Enrique, and Serrao. They need [OpenStack software] to do their jobs. All three are Dell R900s.
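
Whether they’re doing those jobs can at least be verified from the controller side; a sketch, run from Gomes with admin credentials sourced:

    # Each compute node should show up as a hypervisor in state 'up'.
    openstack hypervisor list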

Virtual Hosts

AFS Virtual Servers

AFS is controlled by two virtual machines on Magellan: afs-file and afs-scratch. Each does [stuff].
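
Assuming they are ordinary OpenAFS file servers, the bos utility can report on the server processes each one runs; a sketch:

    # Ask each server's basic overseer (bosserver) what it's running.
    bos status afs-file
    bos status afs-scratch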

OpenStack Service VMs

We have a large number of virtual machines of varying importance that live in OpenStack under the admin tenant. Here’s a quick rundown (a sketch for listing them all follows the rundown):

  • Conch and Cowrie are simple shell servers. They have most of the same software as our desktops, but without the graphical stuff. Conch sits outside the Hopkins firewall; Cowrie is inside it.
  • Egg is another shell server, but it’s special: it’s configured to allow login via JHED for people who, for whatever reason, are using our systems but aren’t ACM members.
  • Einstein is our database VM. Most of our other VMs store their databases on Einstein.
  • Ebola (thanks, Rich) is our bugtracker. It runs Request Tracker (RT), a Perl/Mason application. It’s got the RT “assets” plugin for inventory tracking, but we’ve (obviously) decided to move away from that system.
  • Git runs GitLab. Nothing much to see here.
  • Ci-runner-1 is our GitLab continuous integration worker machine. We should probably have a couple of public, general-purpose workers, but we should really also allow people to set up their own CI runners; this will involve looking into GitLab’s config.
  • Astrolabe
  • Belthazar
  • Quassel is our quasselcore. Ben Rosser has a pull request against Quassel that would allow us to use LDAP as an authentication backend, which in turn would let us give people a Quassel account with their ACM account, but it hasn’t been merged into master yet.
  • Sharpsphere
  • Web and web-v2 are supposed to be our webservers. They are somewhat screwed up at the moment.
  • Wintermute
  • Jupyter runs an IPython server.
  • Hermes runs not-acmbot and records our IRC channel.
  • Minerva was intended to be a print server, but isn’t fully set up.
  • Aperture runs SETI@home and similar crowdsourced-computing software.
  • Blocks is a Minecraft server.
  • Cubes is a Minetest server.
  • There was some thought that the ACM could set up a forum using phpBB or something similar. As with several other ACM projects, this went nowhere, but Gazebo was to be its host.
  • Knalga is a Wesnoth server.
  • Musicbrainz
  • Nymph
  • Threefort might be a TF2 server?
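
As promised above, the full set can be listed from Gomes; a sketch, assuming admin credentials are sourced and python-openstackclient is installed:

    # List every VM owned by the admin tenant.
    openstack server list --project admin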

Chicago

Chicago handles our backups. It runs zfs-root (and thus needs the ZFS packages), and also does [afs server-side things].
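
The actual backup mechanism isn’t written up here, but a ZFS-based scheme would typically snapshot datasets and ship them over ssh; a purely illustrative sketch (the dataset and snapshot names are made up):

    # Hypothetical: snapshot a dataset, then replicate it to Chicago.
    zfs snapshot tank/home@nightly
    zfs send tank/home@nightly | ssh chicago zfs receive -F backup/home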

Desktops

Our desktops are also shell servers; we encourage people to log in to them because they are generally somewhat faster than our virtual shell servers. These machines need AFS clients and ssh servers, plus a whole load of graphical software. Eridani, Arcturus, and Fomalhaut are currently set up as actual two-monitor desktops. Sirius and Polaris are currently running some of the north wall services but are otherwise normal desktops. Vega, Deneb, and Procyon are all powered off; beyond that, their state is unknown.

Beaglebones and RazPis

Typhon and Echidna

Janus

Various Others

Calliope

T2000s

Gacrux

Bigbrother

Bigbrother runs our monitoring software. It uses Nagios and Ganglia (at least…) to track our stats as well as the status of our various machines.
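
Nagios checks are plain-text object definitions; purely as a hypothetical illustration (the real config lives on Bigbrother), a service check looks something like:

    # Hypothetical Nagios service check -- see Bigbrother for the real config.
    define service {
        use                   generic-service
        host_name             magellan
        service_description   SSH
        check_command         check_ssh
    }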