These are sections I removed from the main page because they didn't flow the way I wanted. So they are now reference sections here.

Access logs

Op uses syslog(3) to log every access attempt to LOG_AUTH and it always uses the name "op" (even when called by another name).

I direct this to the system console, not a local file or network service. Then I pick it up on the serial port and log it through my console server (see conserver.com for a more widely supported version than mine). This prevents Bad Guys from covering their tracks by network attack or by deletion of the log files (since they are not on a host that is on the same network as the host they are attacking).
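For a BSD-style syslogd the redirection is a one-line change. This fragment is only a sketch: the selector syntax and the exact facility/priority names vary by syslog implementation, so check your syslogd's manual page.

```
# /etc/syslog.conf (BSD-style syslogd; syntax varies by implementation)
# Send auth-facility messages (op's LOG_AUTH records) to the console.
auth.info					/dev/console
```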

I then produce reports that show who is assuming which role and how often they do it. This helps close the loop on broken applications, abuse of access, and some other political issues.
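As a sketch of such a report: the log lines below are invented for the example (match the pipeline to the format your op actually emits), and a real report would read the console server's log file rather than a sample.

```shell
# Build a tiny sample log; the line format here is hypothetical.
cat > /tmp/op.sample.log <<'EOF'
Jan  1 10:00:00 web1 op: jdoe@web1 executed apache-restart
Jan  1 11:00:00 web1 op: jdoe@web1 executed apache-restart
Jan  2 09:00:00 web1 op: asmith@web1 executed umount
EOF
# Tally how often each login assumed each mnemonic, most frequent first.
awk '$5 == "op:" { split($6, who, "@"); print who[1], $8 }' /tmp/op.sample.log |
	sort | uniq -c | sort -rn
```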

While this is pretty easy to set up, it is beyond the scope of this document.

Return to the main page.

Configuration file details

Op's configuration file syntax is deliberately kept very simple. By keeping the syntax simple we limit the chance of complex expressions leading to breaches in security policy. We want the op "firewall" to keep Bad Guys from gaining access they shouldn't have, while allowing Good Customers to work without useless constraints.

I looked at the usage of many other similar programs (super, sudo, pfexec, other versions of op) to see what features they provided that this version of op lacked. After that review I added the in-line script feature, which I still believe is a mistake.

Lexical conventions

Comments run from the common octothorp (hash, #) to end-of-line, and are removed as in most UNIX™ configuration files. The octothorp is not special when contained in a word.

A word is a sequence of non-white-space characters. A mnemonic is limited in that it must start in column 1, and that, in addition to white-space, it is also terminated by either a semicolon (;) or an ampersand (&). All words are broken at white-space: there is no quote convention to protect white-space inside a word. There is one other lexical construct: an in-line script.

An in-line script is only parsed as the next item immediately after a mnemonic. It groups shell code between curly braces to form a block of shell code. It begins with an open curly ({) as a word and ends at the next line which starts with a close curly (}) as the first non-white-space character. The delimiting curly braces are not included in the resulting text.
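For example, a hypothetical rule whose command is an in-line script (the mnemonic and paths are invented; note the closing curly is the first non-white-space character on its line, and the semicolon after it terminates the arg list):

```
rotate	{
	cd /var/log/httpd || exit 1
	mv access_log access_log.0
	} ;
	uid=root gid=wheel
```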

The technical syntax

As a BNF:

file ::= rule * ;
rule ::= DEFAULT option *
	| mnemonic command arg * (; | &)  option * ;
option ::= word ;
arg ::= word ;
command ::= word
	| '{' ... '\n' white-space * '}'
	;

Meaning of these terms

The configuration file reader doesn't assign much meaning to the arg or option words as they are read. The semantic analysis is deferred until after all the files are input; until that time all arg, option, and command terms are just words.

The special DEFAULT form omits the command part from the rule stanza: this is because it only expresses default options for subsequent rules (so it doesn't need any args).

The semicolon (;) that terminates the arg list may be expressed as an ampersand (&) to force a daemon option into the option list. This shorthand is clearer in the rule-base when it is repeated for many commands.
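For instance, these two hypothetical rules should be equivalent (assuming, as the text above implies, that the option forced into the list is literally spelled "daemon"):

```
# Long form: explicit terminator and daemon option.
mond	/opt/example/sbin/mond ;
	daemon
	uid=root gid=wheel

# Short form: the ampersand terminates the args and implies daemon.
mond	/opt/example/sbin/mond &
	uid=root gid=wheel
```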

In the normal command mode op examines each rule in turn looking for a literal string match for the requested mnemonic against all those listed in the rule-base. Only if one matches are any of the options examined. At that point only the $# and $N options are analyzed.

Only when the command requested has the correct number of parameters and matches the command-line arguments are any DEFAULT options merged into the rule. All subsequent authorization and command construction only looks at that single rule. For all intents the rest of the rule-base is forgotten.

When a sanity check (under -S), list (under -l), or other report (under -r or -w) request is processed, each rule has its DEFAULT merged before it is processed. This could produce insane results, as any default $# would be applied too liberally: use -S to catch this (and many other botches).

Return to the main page.

A primitive display tool

Here is a shell script that (under Linux) outputs lots of useful stuff about a process's environment and attributes; I called it "showme.sh":
#!/bin/sh
# $Id: showme.sh,v 2.29 2008/12/31 20:51:12 ksb Exp $
# $Doc: sed -e 's/&/A''MPamp;/g' -e 's/</\\&lt;/g' -e 's/>/\\&gt;/g' <%f| sed -e 's/[A][M][P]/\\&/g'
(
if [ $# -eq 0 ] ; then
	echo "Process #$$ was executed as \"$0\", with no arguments"
else
	echo "Process #$$ was executed as \"$0\", with arguments of"
	for A
	do
		echo "$A" |sed -e 's/\([\\"$`]\)/\\\1/g' -e 's/^/  "/;s/$/"/'
	done
fi
echo " as login" `id -rnu`", effective" `id -nu`" [uid" \
	`id -ru`", "`id -u`"]"
echo " with group" `id -rng`", effective" `id -ng`" [gid" \
	`id -rg`", "`id -g`"]"
echo " supplementary groups:" `id -nG` "[`id -G`]"
echo ' $PWD='`pwd`
echo ' umask '`umask`
renice 0 $$ 2>&1 |sed -e 's/old/nice/' -e 's/,.*//'
env
) |${PAGER:-more}
exit 0

Return to the main page.

Example jacket code

Helmet processes take their "check parameters" from forced environment variables, then output an exit status to op to deny or allow an escalation. Jacket processes take much the same arguments, but may persist while the escalated access is running. The template code in this section may be adapted to serve as either a jacket or a helmet.

The example jacket parses the command-line options, offers -h, and -V as all good user interfaces should, and has comments in spots where you should edit to make your checks. Copy the jacket.pl file to a working file, edit it and search for each check-point to add your checks.

/CHECKS AND REPARATIONS
This is where a helmet should check the access requested against site policy to either accept (exit 0) or reject (exit non-zero) the requested access.
A jacket process may make the same checks, but should report a failure on stdout, before any exit (as all the checks above did).
/CAPTURE START DATA
A helmet has already exit'd.
A jacket process may record a start-time, or the size of a log file, or any other relevant data it will need in the next step.
/CLEANUP
The process requested has finished, $status holds the exit code, while $? holds the raw wait status. Log anything you need to log, cleanup anything as needed.

If you need to exit non-zero because the access failed this would be the place.

At the end of the file.
Check your code out well, and change my revision control tag ($Id: ...) to your local flavor. Feel free to leave a credit in for the template, if you like.

Check it into your local revision control system and install it as local policy demands.

Here is the code, since some browsers don't like to show the perl script:
#!/usr/bin/perl -T
# An example perl jacket/helmet script (parses the options for you).	(ksb)
# Note that this code will most likely be run under the taint rules, see perlsec(1).
# KS Braunsdorf, at the NPCGuild.org
# $Doc: sed -e 's/&/A''MPamp;/g' -e 's/</\\&lt;/g' -e 's/>/\\&gt;/g' <%f| sed -e 's/[A][M][P]/\\&/g'

use lib  '/usr/local/lib/sac/perl'.join('.', unpack('c*', $^V)),
	'/usr/local/lib/sac';
use Getopt::Std;
use strict;

my($hisPath) = $ENV{'PATH'};
$ENV{'PATH'} = '/usr/bin:/bin:/usr/local/bin:/usr/local/sbin:/sbin';
my($progname, %opts, $usage);
$progname = $0;
$progname =~ s/.*\///;
getopts("VhP:u:g:f:R:C:", \%opts);
$usage = "$progname: usage [-P pid] [-u user] [-g group] [-f file] [-R root] -C config -- mnemonic program euid:egid cred_type:cred";

if ($opts{'V'}) {
	print "$progname: ", '$Id: refs.html,v 2.68 2010/07/28 22:46:39 ksb Exp $', "\n";
	exit 0;
}
if ($opts{'h'}) {
	print "$usage\n",
		"C config   which op configuration file sourced the rule\n",
		"f file     the file specification given to op, as an absolute path\n",
		"g group    the group specification given to op\n",
		"h          standard help output\n",
		"P pid      the process-id of the jacketed process (only as a jacket)\n",
		"R root     the directory we chrooted under\n",
		"u user     the user specification given to op\n",
		"V          standard version output\n",
		"mnemonic   the requested mnemonic\n",
		"program    the program mapped from the mnemonic\n",
		"euid:egid  the computed effective uid and gid\n",
		"cred_type  the credential type that granted access (groups, users, or netgroups)\n",
		"cred       the matching group, login, or netgroup\n";
	exit 0;
}

my($MNEMONIC, $PROGRAM);
shift @ARGV if ('--' eq $ARGV[0]);
if (scalar(@ARGV) != 4) {
	print STDERR "$progname: exactly 4 positional parameters required\n";
	print "64\n" if $opts{'P'};
	exit 64;
}
if ($ARGV[0] !~ m|^([-/\@\w.]+)$|o) {
	print STDERR "$progname: mnemonic is zero width, or spelled badly\n";
	print "64\n" if $opts{'P'};
	exit 64;
}
$MNEMONIC = $1;
if ($ARGV[1] !~ m|^([-/\@\w.]+)$|o) {
	print STDERR "$progname: program specification looks bogus\n";
	print "64\n" if $opts{'P'};
	exit 64;
}
$PROGRAM = $1;
if ($ARGV[2] !~ m/^([^:]*):([^:]*)$/o) {
	print STDERR "$progname: euid:egid $ARGV[2] missing colon\n";
	print "65\n" if $opts{'P'};
	exit 65;
}
my($EUID, $EGID) = ($1, $2);
if ($ARGV[3] !~ m/^([^:]*):([^:]*)$/o) {
	print STDERR "$progname: cred_type:cred $ARGV[3] missing colon\n";
	print "76\n" if $opts{'P'};
	exit 76;
}
my($CRED_TYPE, $CRED) = ($1, $2);

# Now $MNEMONIC is mnemonic, $PROGRAM is program, also $EUID, $EGID,
# $CRED_TYPE, $CRED are set -- so make your checks now.
#
# There are 5 actions you can take, and leading white-space is ignored:
# 1) As above you can output an exit code to the process:
#	print "120\n";
# 2) You can set an environment variable [be sure to backslash the dollar]:
#	print "\$FOO=bar\n"
#    The same line without a value adds the client's $FOO (as presented):
#	print "\$FOO\n";
# 3) You can remove any environment variable:
#	print "-FOO\n";
# 4) You can send a comment which op will output only if -DDEBUG was set
#    when op was built [to help you, Mrs. Admin]:
#	print "# debug comment\n";
# 5) Use op to signal your displeasure with words, making op prefix your
#    comment with "op: jacket: " ("op: helmet: "):
#	print "Permission lost!\n";
#    (This suggests an exit code of EX_PROTOCOL.)
#
# Put your checks and payload here.  Output any commands to the co-process,
# be sure to send a non-zero exit code if you want to stop the access!
# CHECKS AND REPARATIONS
#e.g. check LDAP, kerberos, RADIUS, or time-of-day limits here.

# If we are a helmet you can just exit, if you exit non-zero op will view that
# as a failure to complete the access check, so it won't allow the access.
exit 0 unless $opts{'P'};

# We must be a jacket, and the requested access is not yet running.
# You could set a timer here, or capture the start/stop times etc.
# CAPTURE START DATA
#e.g. call time or set an interval timer
#e.g. block signals

# Let the new process continue by closing stdout, if the last exitcode
# you wrote to stdout was non-zero op won't run the command, I promise.
open STDOUT, ">/dev/null";

# We can wait for the process to exit, we are in perl because the shell
# (ksh,sh) can't get the exit code from a process it didn't start.
my($kid, $status);
$kid = waitpid $opts{'P'}, 0;
$status = $? >> 8;

# Do any cleanup you want to do here, after that the jacket's task is complete.
# Mail a report, syslog something special, restart a process you stopped
# before the rule ran, what ever you need.  On a failure you should exit
# with a non-zero code here.
# CLEANUP
#e.g.: print STDERR "$kid exited with $status\n";
#e.g.: use Sys::Syslog; ... log something clever

# This is the exit code that goes back to the client, since this jacket
# became the op process (as all jackets do).
exit 0;

We could drop uid to a vanilla login (like nobody) as soon as we don't need special permissions. That would be a good idea, if you can manage it. There is a fine line here: you don't really want to drop to the original login's uid, because then they could mess with your process, and the point of a jacket is that the client can't ptrace(2) you.

This is also a sane way to get a PAM session close: in the normal op process flow we execve the escalated program, so op is not around to call pam_close_session(3).

Return to the main page.

Tom's paper reference

%A Tom Christiansen
%T Op: A Flexible Tool for Restricted Superuser Access
%P 89-94
%I USENIX
%B Large Installation Systems Administration III Workshop Proceedings
%D September 7-8, 1989
%C Austin, TX
%W Convex Computer Corporation

In-line scripts are a botch

There was a lot of demand for the in-line script feature. I don't use in-line scripts. I have the master source system to push only the scripts I need to only the hosts that need them.

For the reasons below I try not to put in-line scripts in any access rule-base.

It radiates information
If you call a script from a protected directory (viz. /usr/local/libexec/op) then the Bad Guy can't see the text of the script to aid her in subornation of the code. If you put the code in-line it shows up in ps output while running.
Revision control of the script is bound to the rule-base
I use very strict revision control for all my local tools. Each program outputs a version string under -V and each non-program file holds a revision tag in comments at the top of the files. By putting code without the -V hook in the configuration file I am overloading the revision tag in that file to denote both the revision of the rule-base and the revision of the code.
Confusion of lexical convention
By putting shell (sh, bash, ksh, csh, or perl) in the configuration file we confuse the quoting rules with op's lack of quotes. I see a larger number of misspelled rules when in-line scripts are included.
Owner of access versus owner of action
At a large site the Information Security review of the access configuration becomes far more complex and tedious as we must review the code of each in-line script for each change to the access policy. When we separately review the access policy (with one group) and the code used to grant the access (with application security) we get better feedback on both aspects.

This issue is not as clear at a small site where the op policy is coded by the same administrator that would code any in-line script.

It is not compatible with how other versions of op do the same thing
I didn't like the use of backslashes or single quotes in the other versions of op, so I used curly braces. My bad.

Finally it is easy to make the rule-base work without them: here is an example from another version of op:

umount	...
	case $1 in
	cdrom) /sbin/umount /mnt/cdrom ;;
	dvd) /sbin/umount /mnt/dvd ;;
	burner) /sbin/umount /mnt/burner ;;
	*) echo "op: you do not have permission to unmount \'$1\'" ;;
	esac

In this version of op I would use an RE match on $1:

umount	/sbin/umount /mnt/$1 ;
	$1=^(cdrom|dvd|burner)$
	uid=root gid=operator

Or if I need to limit this to different Customer populations I might use two netgroups:

umount	/sbin/umount /mnt/$1 ;
	$1=^(cdrom|dvd)$
	netgroups=readers
	uid=root gid=operator

umount	/sbin/umount /mnt/$1 ;
	$1=^(burner)$
	netgroups=writers
	uid=root gid=operator

This also gives the Customer a better usage message under -l and -r because it shows the Customer only what they can do, and with a shell-like usage format:

$  op -l
op umount cdrom|dvd
...

Return to the main page.

Tips to build a better rule-base

The configuration of escalation rules is very important to the security of the host: one bad rule might give away superuser access to everyone on a host. An out-of-date rule (allowing someone access that should no longer have it) is bad enough, but it gets worse when a recycled login name (viz. "jsmith") grants a role to the new Ms. Smith that was intended for the previous Mr. Smith.

Start with the assumption that the rule-base is distributed based on the "type" of host that needs the rules; don't assume that the same files are installed on every host, or that the whole rule-base must be defined in a single file. This allows you to use the same mnemonic name on more than one class of server to do the same thing for different Customers. And it allows you to reuse whole files when you need them.

To allow Customers to have different roles, use group membership. Leverage your accounting system to add/remove logins from groups: remove all the login names from the rule-base.

When that doesn't work, fall back to netgroups (really). I know netgroups are old-school, but they solve several issues:

You can run out of groups
A login can only really be in 8 or 16 groups (depending on the version of the OS). A login can be in any number of netgroups.
Changing the rule-base may be harder than changing roles
In some cases local policy may restrict changes to the rule-base more tightly than changes to accounting (/etc/group, /etc/netgroup, and /etc/passwd are usually viewed as under the control of the local Admin, while the escalation rule-base may be under InfoSec).
Roles may change based on the "level" of the host
On non-production hosts some staff may be able to start/stop an application for testing that they should not be able to in production. But allowing their accounts in production may aid in troubleshooting production issues. Allow access via a netgroup, then limit the netgroups file in production a lot more -- with the same rule-base.
Login name in rule-bases are just plain evil
Don't blame anyone else when someone's role changed, but their access didn't.

Group on-demand tasks by facility (like application name) and verb (like "start", "stop", "kill") then code matching rules to take the correct action with the lowest privilege process that can get the task done. Don't accept any parameters that you don't really need.

Don't run anything as the superuser unless you must. Find a mortal login to run each facility.

For tasks you really must run as root be more picky about who can access the rule, and much more picky about parameters.

Never ask for a password unless the rule cannot possibly be used in automation. I use op to start and stop processes as each host boots -- if those rules ask for a password you can't do that.

You may choose to put in-line scripts in your local policy, but I think it's a better practice to use regular expression matches against $1 to pick the correct command within op itself. See in-line above and note that I almost never do that.

Tips to make each rule more clear

Op forces you to start each new escalation rule in column 1. After that the format is largely up to you. I try to phrase rules by this style guide:

Keep the args all on the first line.
This means the maintainer of the rule-base can see all the matching argument lists with grep:
grep '^target-rule' *.cf
Match all the parameters first in the options list (via $*, $N).
This allows the reader to see which rule may (may not) trap a specific request quickly. And it helps make all the rules easier to read.
Be explicit with the regular expression match for any parameter needed specifically.
Give op -l the information it needs to output a great usage message by matching whole words when you can:
	$1=^-[abc]$
is shorter for you to type, but represents the parameter as "$1", while
	$1=^(-a|-b|-c)$
represents the parameter as "-a|-b|-c". Later you might want to add another rule with the same mnemonic and another set of values for $1.
Customers are confused by parameters that are ignored, so use $# with any $*.
This is easy to do with op, so I always put in $# or !N to limit the number of command-line parameters.
Add negative patterns to stop Bad Guys (via !* and !N)
For example adding ../ to path parameters is not what we want. You can thwart their attempts with an explicit rejection:
run	/opt/safe/bin/$1 ;
	!1=^\.\./,/\.\./
	...
Next add the credential options: groups, netgroups, and users
That is the order op matches the rule, and it helps explain to the reader why her request is being rejected. The only users specification I like is "anyone":
	users=^.*$
Using groups is always way better (even for the superuser, include a group for that in your accounting system). Use netgroups, pam or a helmet specification before you fall into the trap of listing login names in your configuration.
Then any helmet, jacket, pam, or password options
These are also limits on who can take the role, so they should be explained before the process modifications. If any of these require information passed via an environment specification you should put that specification above it, or on the same line (as if it were an option).
	helmet=/usr/local/libexec/op/timebox $TIME_SPEC=0100-0459
Add the most important limits and modifications (viz. uid, gid)
If the point of the rule is to change the nice value of the process then that should come first in this section. We are trying to explain to the reader why we are using op to grant special access, so be clear about what is important and what is a "by the way". For example setting an environment variable might be either, by putting it at the top of the list you are telling those-that-cleanup-after-you what you were thinking.
Be explicit with session settings
Adding initgroups or PAM session and cleanup, to make a more complete environment, should be explicit in the rule (I never put those in a global DEFAULT stanza).

The cleanup setting is never used by sudo; very few PAM modules need it (pam_ssh.so really wants it). It costs an extra process to hold the session open as the super user, then close it after the escalated process exits.

Put common stuff in a local DEFAULT at the top of each file
After you have a file with all the rules for a project you might refactor the common parts into a default stanza at the top of the file. Put a comment on the rule that explains why we have our own default list. You should then put a comment at the end that reminds the reader of the default list at the top like:
# All rules using defaults from line 3.
If an auditor gets to that comment and it is not true then, like Lucy, you've got some `splaining to do.
Remind the reader about any imported DEFAULT
Be careful with DEFAULT in access.cf, since that one covers all the other files (without one). I'd put a comment to remind readers of that fact above those rules, as well as at the top of any file that really wants to use the defaults from access.cf.

Use another level of markup if you need it

Every rule-base file needs to be revision controlled, yet still allow for detailed customization based on the context presented by the target node. And the markup for that file must be clear enough to an auditor that it can pass a real review.

For example on some hosts the Apache program might be installed in another location (viz. /opt/apache-version). The native configuration file doesn't have an easy way to mark that up, but m4(1) sure does.

I use msrc(8) (see the HTML document) to send my rule-base out to each host. Each host only gets the rules it needs, and each rule may be marked-up to change parts of the specification on that host, or type of host. For example here is an abstraction of the rules to start or stop the npcguild webserver:

`# $Revision information...
DEFAULT

'include(class.m4)dnl
define(`local_PRIVGROUP',`ifelse(IS_TEST(HOST),yes,``^wiz$,'',``'')')dnl
`web	/opt/npcguild/sbin/web $1 ;
	$1=^(configtest|start|stop|restart)$
	users=^flame$,^root$
	groups='local_PRIVGROUP`^wheel$
	uid=httpd gid=proxy
'dnl
One tip here: put any m4 ifelse markup above the rule that uses it (as above). Any sanity processor may be taught to ignore "local_PRIVGROUP" and the m4 quotes, but it is harder to ignore the arbitrary expressions in the ifelse logic. The alternative is to process every rule-base file through hxmd for every node that might receive it.

Such markup allows the same source file (aka web.cf.host) to be sent to both test and production hosts, but end up with additional groups on the test hosts. Likewise I might tune any other aspect of the rule with similar markup.

The use of a heavy duty configuration management structure, like msrc, in place of a kludge (viz. replicating the same file to every host) makes a world of difference when you manage more than 10 machines, or more than 2 applications per host.

Regardless of how complex the management of moving the contents to each host is, if you are just moving the same file to every host -- you are not solving the problem.

Why use op when we have ...?

I believe the short answer is "complexity".

To understand what a sudo or super rule does you must know everything about the context of the invocation: the IP address of the host, the contents of (seemingly unrelated) environment variables and the whole of the sudoers (or super.tab) file.

Keeping lists of allowed login names in a file is asking for trouble: it will always be out of date and updates will be painful. This is caused by the wholesale lack of certainty in the use of each definition. This is also why I almost always use group membership as the key to access in my rule-base: my accounting system lets me add (delete) groups from my Customers' logins pretty much at will. If your accounting system is lacking you should invest some time in getting a better one, not fight tactical issues forever.

Contrast this to op's stanza definitions. Most of what you need to know to explain a rule is expressed in a short stanza, all in the same place in the file. To eliminate any impact from a DEFAULT stanza just add one above the rule:

DEFAULT

clean-rule	...
We know for certain that the "clean-rule" is not modified by any taint from the DEFAULT in access.cf or above it in the current file.

There is no limit to the unexpected impact a "simple" change might have in sudoers. Using the escalation configuration to change the rules based on the host it is installed on is a poor excuse for configuration management -- if you want two machines to share all the same files, then you really want one bigger machine: buy one. The larger machine is cheaper than the first security issue caused by the lack of control over your escalation policy.

It is far more secure to configure exactly the rules needed on each and every host: not the same superset on all hosts.

Then use op's -S option to sanity check each host for missing programs, directories, or nonsensical rules. You should be sanity checking your access.cf and/or your sudoers files. And you should be doing it on each host periodically.
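A crontab fragment can automate that periodic check. This is only a sketch: the schedule, paths, and mail handling are assumptions, as is the notion that a quiet -S run produces no output (so the mail body is empty when all is well).

```
# Nightly sanity check of the local rule-base; mail root the complaints.
0 3 * * *	/usr/local/bin/op -S 2>&1 | mail -s "op -S on `hostname`" root
```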

Compatibility

Op version 2.x is as compatible with version 1.x as I can make it. I believe any existing version 1 configuration will do exactly the same thing under my version, if you rename the file to access.cf from whatever else it was named.

The path to op and the configuration directory are now under /usr/local because local convention requires it. There is no good reason you could not recompile the program to live under some other location, override RUN_LIB in the build process.

The older parser tried to use the universal escape (backslash, \) to quote dollar and comma with limited success. Now we use a double-comma, and a double-dollar to quote those characters. We don't make backslash special except after a dollar (e.g. $\t). The use of open curly ({) and close curly (}) to quote an in-line script is not identical to recent branches of op, but I believe it is clear, and avoids any use of backslash inside the script. (It is always safe to put a semicolon before a leading close curly in a shell script or perl program.)
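For example, in a hypothetical rule a doubled dollar passes a literal dollar sign through the expander:

```
# $$ expands to one literal "$", then $1 is the first argument,
# so "op show 42" would echo "The price is $42".
show	/bin/echo The price is $$$1 ;
	$1=^[0-9]+$
	uid=nobody gid=nogroup
```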

In the following sections I point out how to convert from other escalation programs to op.

Moving from super to op

Super filters the environment with some hard-coded rules (for TERM, LINES, and some others). The DEFAULT stanza below should make some of that less of a problem:
DEFAULT	# super compat mode
	environment=^LINES=[0-9]+$,^COLUMNS=[0-9]+$,^TERM=[-/:+._a-zA-Z0-9]+$
	$IFS=$\s$\t$\n
	$USER=$t $LOGNAME=$t $HOME=$H
	$ORIG_USER=$l $ORIG_LOGNAME=$l $ORIG_HOME=$h
	$ORIG_SHELL=${SHELL}
	$PATH=/bin:/usr/bin
	$SUPERCMD=$0

There is no way to emulate super's shebang #! magic with op. Just use "op script" and let the rule set the program path. This is more secure in the long run.
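A hypothetical replacement rule: name the script in the rule-base (the mnemonic, path, and group are invented) and let Customers run "op script":

```
script	/usr/local/libexec/op/nightly.sh ;
	groups=^operator$
	uid=root gid=wheel
```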

Moving from sudo to op

First, I think you'll find the conversion of the rule-base much easier than you might first believe. The sudoers files I've helped convert tend to range from wildly insecure to limitless, allowing unlimited system access to nearly every user (usually inadvertently).

Stop using "sudo ..." as a prefix for "just run this as root"; start using other mortal (application) logins and limited commands. Then see the configuration tips above.

To set an environment that looks like sudo's:

DEFAULT	# look more like sudo
	$USER=$t $HOME=$H
	$SUDO_COMMAND=$_ $SUDO_USER=$l $SUDO_UID=$L $SUDO_GID=$R
	# PS1=${SUDO_PS1}

If you want more command-line compatibility you can look for the sudop command that tries to make op look more like sudo.

Moving from pfexec to op

Just get off the drugs! If you've been putting stuff in /etc/user_attr on your hosts you are in a hell all of your own making.

The whole getuserattr(3) manual page stinks of YAGNI code (like the many "reserved for future use" fields in the structures). If I want to keep a list of who can do what in a file, I'll use op's configuration and skip all the extra cheese in a generic feature that's looking for a problem to solve.

With pfexec it is way too easy to give away more access than you thought you were giving. And you always have to manage roles by login name, which is the hardest way to manage escalation rules.

Build op rules for the roles people really need and skip the generic functions that give away the show.

Moving from su2 to op

The rule to emulate a "~www/.su2rc" is a fine example:
www	MAGIC_SHELL ;
	uid=www initgroups=www
	groups=your-list-here

I agree that it is useful for a user to shift to a shared login for some tasks: but I'd rather not make it easy for Customers to Trojan each other without a setuid-bit on the created file.

Alphabetical list of expander macros

$A
The gid list for the client process.
$C
The configuration directory, in other words the directory containing the access.cf file. The dirname of $c.
$D
An open read-only file descriptor on the directory part of file.
$E
If op has a setuid bit on it, this is the uid that owns the file.
$F
A read-write file descriptor for file. This is represented as a small integer for shell file duplication, as in 1>&$F.
$G
The group-id (gid) for the group specified on the command-line.
$H
The home directory of the target login.
$I
The target uid for any initgroups(3) call from the rule.
$L
The client login's uid.
$N
The new gid list given to setgroups(2).
$O
The target real gid.
$P
The uid of any PAM session.
$R
The client's real gid.
$S
The trusted path to the shell, viz. /bin/sh. This may be overridden by an environment specification of $SHELL.
$T
The target uid.
$U
The uid of the login provided to -u. When this is requested the command specification must include that option.
$W
The line number of the configuration entry that defined the access rule.
$a
The group list for the client process.
$c
The path to the default configuration file, also output under -V. Usually "/usr/local/lib/op/access.cf".
$d
An open read-only file descriptor on the directory part of file specified under -f.
$e
The login name of the effective uid op is setuid to, usually "root".
$f
The file specified on the command line.
$g
The group name specified by group on the command-line.
$h
The home directory of the client login.
$i
The target login for any initgroups(3) call from the rule.
$l
The client login name. This is taken from $LOGNAME or $USER if either maps to the real uid. Otherwise the first successful reverse lookup of the real uid is taken.
$n
The new group list given to setgroups(2).
$o
The target real group name.
$p
The login name of any PAM session.
$r
The client's real group name.
$s
The script body specified for the current rule. (When the first parameter is a curly-brace form.)
$t
The target login name.
$u
The login provided to -u by name. When this is requested the command specification must include that option. If the specification is a uid (that is a number) it must resolve to a valid login.
$w
The name of the configuration file that defined the access rule.
$~
The home directory of the effective uid op was executed under (usually root's home directory). Op may be installed setuid to another user (usually by a different name): in this case it acts as a less powerful application service, but still retains much of its effectiveness.
${ENV}
The value of the environment variable ENV as it was presented in the original environment.
$number
The positional parameter specified. Note that $0 is the mnemonic name selected.
$#
The count of the arguments provided.
$$
A single literal dollar sign ($).
$.
Put in a word break. This is not usually needed from the context of a configuration file (as white-space is more clear), but may be used by automation that is generating a rule-base.
$*
Expand to the positional parameters as a single catenated word. This is useful when a rule wants to group the separate words given to it into a log message (for example).
$@
Expand to the positional parameters with word breaks preserved. This is useful when a rule wants to pass the parameters it was provided on to the next command as-given.
$!
$&
These internal expanders may change without notice, so don't apply them. FYI $! is the same as $@, but skips the first word in the parameter list. Also $& is the same as $*, but skips the first word in the parameter list as well.
$\escape
Allow any tr(1) backslash notation to specify a special character. The letter 's' is also allowed for a space character.
$|
The empty string: useful to remove the special "end of parameter list" meaning from either an ampersand or semicolon.
$_
The target script or shell (under MAGIC_SHELL). This may not be used to define itself, of course. This is handy to allow a environment variable to pass the target program path on to a helmet or jacket.
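As a worked example tying several expanders together (the mnemonic, helper path, and variable names here are invented):

```
# Pass the client login ($l) and the matching rule's location ($w:$W)
# to a helper via forced environment variables.
report	/usr/local/libexec/op/report $1 ;
	$1=^(daily|weekly)$
	$OP_CLIENT=$l $OP_RULE=$w:$W
	uid=root gid=operator
```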

$Id: refs.html,v 2.68 2010/07/28 22:46:39 ksb Exp $