OpenAFS Partition Scheme

Partitions Stored in Ceph

RBD NAME                     SERVER      PARTITION  CONTENTS
rbdafs-home/home-1           afsfile     vicepu     user.* RW, mail.* RW
rbdafs-home/service-1        afsfile     vicepa     service.* RW
rbdafs-home/group-1          afsfile     vicepg     group.* RW
rbdafs-homescr/groupscr-0    afsscratch  vicepg     gscr.* RW
rbdafs-homescr/homescr-1     afsscratch  viceps     scr.* RW
rbdafs-homescr/genscratch-0  afsscratch  vicept     scratch RW
rbdafs-mirror/mirror-1       afsscratch  vicepm     mirror.* RW
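
Each of these RBD images is mapped on the named server and mounted at the corresponding /vicepX path. As a rough sketch of bringing one online by hand (the /dev/rbd device path, the choice of ext4, and the exact steps are assumptions about the local setup, not a record of it):

    # illustrative only; normally handled by the server's boot-time plumbing
    rbd map rbdafs-home/home-1
    mkfs.ext4 /dev/rbd/rbdafs-home/home-1       # first use only
    mount /dev/rbd/rbdafs-home/home-1 /vicepu
    vos partinfo afsfile -partition /vicepu     # confirm the partition is visible (may need a fileserver restart)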

Partitions on Chicago

Properly speaking, it is volumes that are released and archived, and volumes merely live on partitions; we nonetheless speak in this table of a partition’s schedule, because the setup we have built treats all volumes on each of these partitions uniformly.

PARTITION  CONTENTS                                       RELEASE SCHEDULE    ARCHIVE SCHEDULE             SEE
vicepa     user, group, service RO; chicago-specific RW   nightly by chicago  weekly by chicago (RO only)  [1]
vicepb     various special cases                          nightly by chicago  manually / never
vicepi     mail RO volumes                                nightly by chicago  weekly by chicago (RO only)  [2]
vicepm     mirror.* RO                                    mirrors automation  manually / never
viceps     gscr, scr RO                                   manually / never    manually / never
vicepz     misc                                           manually / never    manually / never             [3]
[1] Long-term AFS Archives with bup and various scripts in chicago’s ~root/bin (backed up to file:///afs/acm.jhu.edu/group/admins.pub/scripts/chicago).
[2] See [1] for details of release and archive; mail.* volumes are hosted on Chicago, i.e. outside the cluster, so that our mail system can deliver even if the larger backing stores are offline, as there is typically some urgency associated with mail.
[3] Reserved for experiments.
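
The "nightly by chicago" release is driven by the scripts in chicago’s ~root/bin mentioned in [1]. The underlying operation is simply a vos release of every volume with an RO site on the partition; the sketch below is illustrative only (the bare host name and the awk filtering are assumptions, not the real script):

    # illustrative only; the real scripts live in chicago's ~root/bin
    vos listvol chicago -partition a -localauth |
      awk '/\.readonly / { sub(/\.readonly$/, "", $1); print $1 }' |
      sort -u |
      while read vol; do vos release "$vol" -localauth; done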

Worth noting while we are here: /afs/acm.jhu.edu/readonly is deliberately designed as a shadow copy of the cell, containing entirely RO mounts. Our various volume-creation scripts mount things over here too, and if you’re doing something fancy you might consider playing along. This helps to ensure that even in the event of a catastrophic failure of the cluster and ACM services, data will remain online enough that someone can at least copy it elsewhere, without admin intervention or having to replay from backup media.
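
For instance, a volume-creation script that mounts a hypothetical group.foo in the usual place would also add a matching mount under the readonly tree. A sketch, assuming the mount points are made through the read-write path (/afs/.acm.jhu.edu) and that root.cell is among the volumes holding these directories:

    fs mkmount /afs/.acm.jhu.edu/group/foo          group.foo
    fs mkmount /afs/.acm.jhu.edu/readonly/group/foo group.foo
    vos release root.cell    # and/or whichever parent volumes hold these directories

A plain mount point reached through an RO path resolves to the target volume’s RO replica once one has been released, which is what keeps the shadow tree entirely read-only.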

Release and Backup Schedules of Volumes

Volumes                             Schedule
group.*, service.*, user.*, root.*  Expected to exist on Chicago /vicepa and will be picked up by its
                                    weekly scan script. (But see below for special cases!)
gscr.*, scr.*                       vos backupsys on afsscratch as bos cronjob (sketch below)
scratch                             vos backup on afsscratch as bos cronjob
mirror.*                            manages itself via mirrors automation (see below)
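
The bos cron jobs behind the gscr/scr and scratch rows could be created along these lines; the instance names, times, and path to vos are illustrative guesses, not a record of the live configuration:

    bos create afsscratch backupscr cron \
        -cmd "/usr/sbin/vos backupsys -prefix gscr scr -localauth" "04:00"
    bos create afsscratch backupscratch cron \
        -cmd "/usr/sbin/vos backup scratch -localauth" "04:30"
    bos status afsscratch -long        # confirm the cron instances are registered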

Note that admins.pub is a special case; see The Special Case of admins.pub.

The Special Case of Mirrors and Other Unprivileged Release Operations

Mirrors (/vicepm wherever it may be found) are on their own release schedule, as part of the mirrors update automation. However, since release is a privileged operation in AFS, we run remctld and afs-backend on one of our AFS servers. Users of this service are encouraged to use wrapsrv and the DNS SRV record(s) under the namespace _remctl.acm.jhu.edu, rather than pointing at a particular server.
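
For example, a client might let wrapsrv pick the server from the SRV record instead of naming one; the exact record label, the %h/%p substitution, and the volume name here are illustrative and should be checked against the wrapsrv man page:

    wrapsrv _remctl._tcp.acm.jhu.edu remctl -p %p %h afs-backend release mirror.debian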

afs-backend is configured to allow the host/mirrors-updater.vm.acm.jhu.edu (AKA rcmd.mirrors-updater) principal to release any volume whose name matches /^mirror\..*$/. For details, see the configuration on afsptvl.vm and the scripts in service/mirrors.
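
A minimal sketch of the remctld side of that, with file paths, the backend script location, and the Kerberos realm all assumed (the authoritative configuration lives on afsptvl.vm):

    # /etc/remctl/conf.d/afs-backend (illustrative)
    afs-backend release /usr/local/bin/afs-backend /etc/remctl/acl/afs-release

    # /etc/remctl/acl/afs-release (illustrative)
    host/mirrors-updater.vm.acm.jhu.edu@ACM.JHU.EDU

remctld itself only maps the command to the backend and checks the principal against the ACL; the /^mirror\..*$/ restriction on volume names is presumably enforced by the afs-backend script.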

Members of system:non-admin-hats are authorized to release any volume. So if you’re cruising along inside the cluster and want to release a volume without your /admin hat on, run

remctl afsptvl.trinidad.acm.jhu.edu afs-backend release $VOLUMENAME

Other Special Cases

user.fumbeya.hd is a special-purpose volume for some medical imaging data that Fumbeya is hosting with us; it is located on a non-archival partition, but is nonetheless replicated to Chicago.
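
If you need to verify that arrangement (or any volume’s placement), vos will show the RW site and every RO site, for example:

    vos examine user.fumbeya.hd        # lists the volume's RW site and its RO replication sites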