Op uses syslog(3) to log every access attempt to LOG_AUTH, and it always uses the name "op" (even when called by another name).
I direct this to the system console, not a local file or network service. Then I pick it up on the serial port and log it through my console server (see conserver.com for a more widely supported version than mine). This prevents Bad Guys from covering their tracks by network attack or by deleting the log files, since the logs are not on a host on the same network as the host they are attacking.
I then produce reports that show who is assuming which role and how often they do it. This helps close the loop on broken applications, abuse of access, and some other political issues.
While this is pretty easy to set up, it is beyond the scope of this document.
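As a rough sketch only (the selector line assumes the classic BSD syslog.conf syntax, and the capture file path and report pipeline are hypothetical, since the console server names its capture files however you told it to):

# /etc/syslog.conf: send LOG_AUTH traffic to the console, where the
# console server picks it up on the serial port
auth.info	/dev/console

# crude who-assumed-which-role report from the console capture file
grep ' op\[' /var/log/console/myhost | sort | uniq -c | sort -rn | head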
Op's configuration file syntax is designed to be overly simple. By keeping the syntax super simple we are trying to limit the chance of complex expressions leading to breaches in security policy. We want the op "firewall" to keep Bad Guys from gaining access they shouldn't have, while allowing Good Customers to work without useless constraints.
I looked at the usage of many other similar programs (super, sudo, pfexec, other versions of op) to see what features they provided that this version of op lacked. After that review I added the in-line script feature, which I still believe is a mistake.
Comments run from an octothorp (#) to end-of-line, and are removed like most UNIX™ configuration files. The octothorp is not special when contained in a word.
A word is a sequence of non-white-space characters. A mnemonic is limited in that it must start in column 1 and that, in addition to white-space, it is also terminated by either a semicolon (;) or an ampersand (&). All words are broken at white-space: there is no quote convention to protect white-space inside a word.
There is one other lexical construct: an in-line script. An in-line script is only parsed as the next item immediately after a mnemonic. It groups shell code between curly braces to form a single block. It begins with an open curly ({) as a word and ends at the next line which starts with a close curly (}) as the first non-white-space character. The delimiting curly braces are not included in the resulting text.
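For illustration only (the mnemonic, the script body, and the placement of the terminating semicolon after the closing curly are my guesses, not text from a distributed rule-base), an in-line script used as the command might look like:

diskrpt {
	# everything between the braces becomes the body of the command
	df -k
	mount -p
} ; uid=root gid=operator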
As a BNF:
	file    ::= rule* ;
	rule    ::= DEFAULT option*
	          | mnemonic command arg* ( ';' | '&' ) option* ;
	option  ::= word ;
	arg     ::= word ;
	command ::= word | '{' ... '\n' white-space* '}' ;
The parser does not classify arg or option words as they are read. The semantic analysis is deferred until after all the files are input; up until that time all arg, option, and command terms are just words.
The special DEFAULT form omits the command part from the rule stanza: this is because it only expresses default options for subsequent rules (so it doesn't need any args).
The semicolon (;) that terminates the arg list may be expressed as an ampersand (&) to force a daemon option into the option list. This shorthand is clearer in the rule-base when it is repeated for many commands.
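As an illustration only (the mnemonics, paths, and group names here are made up), a tiny rule-base exercising both terminators might read:

DEFAULT uid=root gid=wheel
mount /sbin/mount /mnt/$1 ; $1=^(cdrom|dvd)$ groups=^operator$
webd /usr/local/sbin/httpd & groups=^www$ uid=httpd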
In the normal command mode op examines each rule in turn, looking for a literal string match for the requested mnemonic against all those listed in the rule-base. Only if one matches are any of the options examined. At that point only the $# and $N options are analyzed. Only when the command requested has the correct number of parameters and matches the command-line arguments are any DEFAULT options merged into the rule. All subsequent authorization and command construction only looks at that single rule. For all intents and purposes the rest of the rule-base is forgotten.
When a sanity check (under -S), list (under -l), or other report (under -r or -w) request is processed, each rule has its DEFAULT merged before it is processed. This could produce insane results, as any default $# would be applied too liberally: use -S to catch this (and many other botches).
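For example (output omitted, since it depends entirely on your rule-base), I might run checks like these by hand or from cron:

op -S		# sanity check every rule, with DEFAULT stanzas merged in
op -l		# list the mnemonics the invoking login may run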
#!/bin/sh
# $Id: showme.sh,v 2.29 2008/12/31 20:51:12 ksb Exp $
# $Doc: sed -e 's/&/A''MPamp;/g' -e 's/</\\</g' -e 's/>/\\>/g' <%f| sed -e 's/[A][M][P]/\\&/g'
(
if [ $# -eq 0 ] ; then
	echo "Process #$$ was executed as \"$0\", with no arguments"
else
	echo "Process #$$ was executed as \"$0\", with arguments of"
	for A
	do
		echo "$A" |sed -e 's/\([\\"$`]\)/\\\1/g' -e 's/^/ "/;s/$/"/'
	done
fi
echo " as login" `id -rnu`", effective" `id -nu`" [uid" \
	`id -ru`", "`id -u`"]"
echo " with group" `id -rng`", effective" `id -ng`" [gid" \
	`id -rg`", "`id -g`"]"
echo " supplementary groups:" `id -nG` "[`id -G`]"
echo ' $PWD='`pwd`
echo ' umask '`umask`
renice 0 $$ 2>&1 |sed -e 's/old/nice/' -e 's/,.*//'
env
) |${PAGER:-more}
exit 0
A helmet process is run by op to deny or allow an escalation. Jacket processes take much the same arguments, but may persist while the escalated access is running. The template code in this section may be adapted to serve as either a jacket or a helmet.
The example jacket parses the command-line options, offers -h and -V as all good user interfaces should, and has comments in spots where you should edit to make your checks.
Copy the jacket.pl file to a working file, edit it, and search for each check-point to add your checks.
/CHECKS AND REPARATIONS
    Put your checks here and write any directives for op to stdout, before any exit (as all the checks above did).
/CAPTURE START DATA
    Only a jacket gets this far; a helmet will have already exit'd. You could set a timer here, or capture the start time.
/CLEANUP
    $status holds the exit code, while $? holds the raw wait status. Log anything you need to log, clean up anything as needed. If you need to exit non-zero because the access failed, this would be the place.
Change the revision tag ($Id: ...) to your local flavor. Feel free to leave a credit in for the template, if you like. Check it into your local revision control system and install it as local policy demands.
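Once the jacket is installed, a rule engages it with a jacket= option, in the same spirit as the helmet= example later in this page (the mnemonic, program, and installed path here are hypothetical):

dbdump /usr/local/sbin/dbdump $1 ; $1=^[a-z]+$ uid=root gid=dba jacket=/usr/local/libexec/op/jacket.pl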
#!/usr/bin/perl -T
# An example perl jacket/helmet script (parses the options for you).	(ksb)
# Note that this code will most likely be run under the taint rules, see perlsec(1).
# KS Braunsdorf, at the NPCGuild.org
# $Doc: sed -e 's/&/A''MPamp;/g' -e 's/</\\</g' -e 's/>/\\>/g' <%f| sed -e 's/[A][M][P]/\\&/g'

use lib '/usr/local/lib/sac/perl'.join('.', unpack('c*', $^V)), '/usr/local/lib/sac';
use Getopt::Std;
use strict;

my($hisPath) = $ENV{'PATH'};
$ENV{'PATH'} = '/usr/bin:/bin:/usr/local/bin:/usr/local/sbin:/sbin';

my($progname, %opts, $usage);
$progname = $0;
$progname =~ s/.*\///;
getopts("VhP:u:g:f:R:C:", \%opts);
$usage = "$progname: usage [-P pid] [-u user] [-g group] [-f file] [-R root] -C config -- mnemonic program euid:egid cred_type:cred";
if ($opts{'V'}) {
	print "$progname: ", '$Id: refs.html,v 2.68 2010/07/28 22:46:39 ksb Exp $', "\n";
	exit 0;
}
if ($opts{'h'}) {
	print "$usage\n",
		"C config which op configuration file sourced the rule\n",
		"f file the file specification given to op, as an absolute path\n",
		"g group the group specification given to op\n",
		"h standard help output\n",
		"P pid the process-id of the jacketed process (only as a jacket)\n",
		"R root the directory we chrooted under\n",
		"u user the user specification given to op\n",
		"V standard version output\n",
		"mnemonic the requested mnemonic\n",
		"program the program mapped from the mnemonic\n",
		"euid:egid the computed effective uid and gid\n",
		"cred_type the credential type that granted access (groups, users, or netgroups)\n",
		"cred the matching group, login, or netgroup\n";
	exit 0;
}

my($MNEMONIC, $PROGRAM);
shift @ARGV if ('--' eq $ARGV[0]);
if (scalar(@ARGV) != 4) {
	print STDERR "$progname: exactly 4 positional parameters required\n";
	print "64\n" if $opts{'P'};
	exit 64;
}
if ($ARGV[0] !~ m|^([-/\@\w.]+)$|o) {
	print STDERR "$progname: mnemonic is zero width, or spelled badly\n";
	print "64\n" if $opts{'P'};
	exit 64;
}
$MNEMONIC = $1;
if ($ARGV[1] !~ m|^([-/\@\w.]+)$|o) {
	print STDERR "$progname: program specification looks bogus\n";
	print "64\n" if $opts{'P'};
	exit 64;
}
$PROGRAM = $1;
if ($ARGV[2] !~ m/^([^:]*):([^:]*)$/o) {
	print STDERR "$progname: euid:egid $ARGV[2] missing colon\n";
	print "65\n" if $opts{'P'};
	exit 65;
}
my($EUID, $EGID) = ($1, $2);
if ($ARGV[3] !~ m/^([^:]*):([^:]*)$/o) {
	print STDERR "$progname: cred_type:cred $ARGV[3] missing colon\n";
	print "76\n" if $opts{'P'};
	exit 76;
}
my($CRED_TYPE, $CRED) = ($1, $2);

# Now $MNEMONIC is mnemonic, $PROGRAM is program, also $EUID, $EGID,
# $CRED_TYPE, $CRED are set -- so make your checks now.
#
# There are 5 actions you can take, and leading white-space is ignored:
# 1) As above you can output an exit code to the process:
#	print "120\n";
# 2) You can set an environment variable [be sure to backslash the dollar]:
#	print "\$FOO=bar\n"
#    The same line without a value adds the client's $FOO (as presented):
#	print "\$FOO\n";
# 3) You can remove any environment variable:
#	print "-FOO\n";
# 4) You can send a comment which op will output only if -DDEBUG was set
#    when op was built [to help you, Mrs. Admin]:
#	print "# debug comment\n";
# 5) Use op to signal your displeasure with words, making op prefix your
#    comment with "op: jacket: " ("op: helmet: "):
#	print "Permission lost!\n";
#    (This suggests an exit code of EX_PROTOCOL.)
#
# Put your checks and payload here.  Output any commands to the co-process,
# be sure to send a non-zero exit code if you want to stop the access!
# CHECKS AND REPARATIONS
#e.g. check LDAP, kerberos, RADIUS, or time-of-day limits here.

# If we are a helmet you can just exit, if you exit non-zero op will view that
# as a failure to complete the access check, so it won't allow the access.
exit 0 unless $opts{'P'};

# We must be a jacket, and the requested access is not yet running.
# You could set a timer here, or capture the start/stop times etc.
# CAPTURE START DATA
#e.g. call time or set an interval timer
#e.g. block signals

# Let the new process continue by closing stdout, if the last exitcode
# you wrote to stdout was non-zero op won't run the command, I promise.
open STDOUT, ">/dev/null";

# We can wait for the process to exit, we are in perl because the shell
# (ksh,sh) can't get the exit code from a process it didn't start.
my($kid, $status);
$kid = waitpid $opts{'P'}, 0;
$status = $? >> 8;

# Do any cleanup you want to do here, after that the jacket's task is complete.
# Mail a report, syslog something special, restart a process you stopped
# before the rule ran, what ever you need.  On a failure you should exit
# with a non-zero code here.
# CLEANUP
#e.g.: print STDERR "$kid exited with $status\n";
#e.g.: use Sys::Syslog; ... log something clever

# This is the exit code that goes back to the client, since this jacket
# became the op process (as all jackets do).
exit 0;
We could drop uid to a vanilla login (like nobody) as soon as we don't need special permissions. That would be a good idea, if you can manage it. There is a fine line here: you don't really want to drop to the original login's uid, because then they can mess with your process, and the point of a jacket is that the client can't ptrace(2) you.
This is also a sane way to get a PAM session close, as in the normal op process flow we execve the escalated program, so op is not around to call pam_close_session(3).
Tom Christiansen, "Op: A Flexible Tool for Restricted Superuser Access", pp. 89-94, Large Installation Systems Administration III Workshop Proceedings, USENIX, Austin, TX, September 7-8, 1989 (Convex Computer Corporation).
For the reasons below I try not to put in-line scripts in any access rule-base.
If the script lives in a file under a protected directory (viz. /usr/local/libexec/op) then the Bad Guy can't see the text of the script to aid her in subornation of the code. If you put the code in-line it shows up in ps output while running.
Every program I install answers -V, and each non-program file holds a revision tag in comments at the top of the file. By putting code without the -V hook in the configuration file I am overloading the revision tag in that file to denote both the revision of the rule-base and the revision of the code.
By embedding another language (sh, bash, ksh, csh, or perl) in the configuration file we confuse that language's quoting rules with op's lack of quotes. I see a larger number of misspelled rules when in-line scripts are included.
This issue is not as clear at a small site where the op policy is coded by the same administrator that would code any in-line script.
I added the feature to let this op do the same thing as other versions of op, so I used curly braces. My bad.
Finally, it is easy to make the rule-base work without them: here is an example from another version of op:
umount ...
	case $1 in
	cdrom) /sbin/umount /mnt/cdrom ;;
	dvd) /sbin/umount /mnt/dvd ;;
	burner) /sbin/umount /mnt/burner ;;
	*) echo "op: you do not have permission to unmount \'$1\'" ;;
	esac
In this version of op I would use an RE match on $1:
umount /sbin/umount /mnt/$1 ; $1=^(cdrom|dvd|burner)$ uid=root gid=operator
Or if I need to limit this to different Customer populations I might use two netgroups:
umount /sbin/umount /mnt/$1 ; $1=^(cdrom|dvd)$ netgroups=readers uid=root gid=operator
umount /sbin/umount /mnt/$1 ; $1=^(burner)$ netgroups=writers uid=root gid=operator
This also gives the Customer a better usage message under -l and -r, because it shows the Customer only what they can do, and with a shell-like usage format:
$ op -l
op umount cdrom|dvd ...
Start with the assumption that the rule-base is distributed based on the "type" of host that needs the rules; don't assume that the same files are installed on every host, or that the whole rule-base must be defined in a single file. This allows you to use the same mnemonic name on more than one class of server to do the same thing for different Customers. And it allows you to reuse whole files when you need them.
To allow Customers to have different roles, use group membership. Leverage your accounting system to add/remove logins from groups: remove all the login names from the rule-base.
When that doesn't work fall back to netgroups (really). I know netgroups is old-school, but it solves several issues: /etc/group, /etc/netgroup, and /etc/passwd are usually viewed as under the control of the local Admin, while the escalation rule-base may be under InfoSec.
Group on-demand tasks by facility (like application name) and verb (like "start", "stop", "kill") then code matching rules to take the correct action with the lowest privilege process that can get the task done. Don't accept any parameters that you don't really need.
Don't run anything as the superuser unless you must. Find a mortal login to run each facility.
For tasks you really must run as root be more picky about who can access the rule, and much more picky about parameters.
Never ask for a password unless the rule cannot possibly be used in automation. I use op to start and stop processes as each host boots -- if those rules ask for a password you can't do that.
You may choose to put in-line scripts in your local policy, but I think it's a better practice to use regular expression matches against $1 to pick the correct command within op itself. See the in-line script discussion above, and note that I almost never do that.
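To make those tips concrete, here is a hypothetical pair of rules (the application, paths, logins, and groups are all made up) that use a mortal application login, group-based access, and an RE match on $1 rather than an in-line script:

appctl /usr/local/app/bin/ctl $1 ; $1=^(start|stop|status)$ groups=^app-ops$ uid=app gid=app
applog /usr/bin/tail /var/log/app/$1 ; $1=^[a-z]+\.log$ groups=^app-dev$ uid=app gid=app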
Op forces you to start each new escalation rule in column 1. After that the format is largely up to you. I try to phrase rules by this style guide:
Keep the mnemonic, the command, and its args all on the first line. That way any rule is easy to find with grep:
grep '^target-rule' *.cf
Next put the parameter checks on the options list (via $*, $N). Give op -l the information it needs to output a great usage message by matching whole words when you can: $1=^-[abc]$ is shorter for you to type, but represents the parameter as "$1", while $1=^(-a|-b|-c)$ represents the parameter as "-a|-b|-c". Later you might want to add another rule with the same mnemonic and another set of values for $1.
Always pair a $# with any $*. Extra parameters are passed along to the command by op, so I always put a $# in, or a !N, to limit the number of command line parameters.
Next the negative filters (!* and !N). A Customer adding ../ to path parameters is not what we want. You can thwart their attempts with an explicit rejection:
run /opt/safe/bin/$1 ; !1=^\.\./,/\.\./ ...
Next the groups, netgroups, and users options. These say for whom op matches the rule, and they help explain to the reader why her request is being rejected.
The only users specification I like is "anyone": users=^.*$. Using groups is always way better (even for the superuser; include a group for that in your accounting system). Use netgroups, pam, or a helmet specification before you fall into the trap of listing login names in your configuration.
Next the helmet, jacket, pam, or password options, for example:
helmet=/usr/local/libexec/op/timebox $TIME_SPEC=0100-0459
Then the identity options (uid, gid). If you change the nice value of the process then that should come first in this section. We are trying to explain to the reader why we are using op to grant special access, so be clear about what is important and what is a "by the way". For example, setting an environment variable might be either; by putting it at the top of the list you are telling those-that-cleanup-after-you what you were thinking.
Options like initgroups or a PAM session and cleanup, to make a more complete environment, should be explicit in the rule (I never put those in a global DEFAULT stanza). The cleanup setting is never used by sudo; very few PAM modules need it (pam_ssh.so really wants it). It costs an extra process to hold the session open as the super user, then close it after the escalated process exits.
Only put a DEFAULT at the top of each file. I'd keep a single one, the DEFAULT in access.cf, since that one covers all the other files (without one of their own). I'd put a comment to remind readers of that fact above those rules, as well as at the top of any file that really wants to use the defaults from access.cf:
# All rules using defaults from line 3
If an auditor gets to that comment and it is not true then, like Lucy, you've got some 'splaining to do.
For example, on some hosts the Apache program might be installed in another location (viz. /opt/apache-version). The native configuration file doesn't have an easy way to mark that up, but m4(1) sure does.
I use msrc(8) (see the HTML document) to send my rule-base out to each host. Each host only gets the rules it needs, and each rule may be marked up to change parts of the specification on that host, or type of host. For example, here is an abstraction of the rules to start or stop the npcguild webserver:
`# $Revision information...
DEFAULT '
include(class.m4)dnl
define(`local_PRIVGROUP',`ifelse(IS_TEST(HOST),yes,``^wiz$,'',``'')')dnl
`web /opt/npcguild/sbin/web $1 ; $1=^(configtest|start|stop|restart)$ users=^flame$,^root$ groups='local_PRIVGROUP`^wheel$ uid=httpd gid=proxy
'dnl
One tip here: put any m4 ifelse markup above the rule that uses it (as above).
Any sanity processor may be taught to ignore "local_PRIVGROUP" and the m4 quotes, but it is harder to ignore the arbitrary expressions in the ifelse logic.
The alternative is to process every rule-base file through hxmd for every node that might receive it.
Such markup allows the same source file (aka web.cf.host) to be sent to both test and production hosts, but end up with additional groups on the test hosts. Likewise I might tune any other aspect of the rule with similar markup.
The use of a heavy duty configuration management structure, like msrc, in place of a kludge (viz. replicating the same file to every host) makes a world of difference when you manage more than 10 machines, or more than 2 applications per host.
Without regard for how complex the management of moving the contents to each host is, if you are just moving the same file to every host -- you are not solving the problem.
To understand what a sudo or super rule does you must know everything about the context of the invocation: the IP address of the host, the contents of (seemingly unrelated) environment variables, and the whole of the sudoers (or super.tab) file.
Keeping lists of allowed login names in a file is asking for trouble: it will always be out of date and updates will be painful. This is caused by the wholesale lack of certainty in the use of each definition. This is also why I almost always use group membership as the key to access in my rule-base: my accounting system lets me add (or delete) groups from my Customers' logins pretty much at will. If your accounting system is lacking you should invest some time in getting a better one, not fight tactical issues forever.
Contrast this to op's stanza definitions. Most of what you need to know to explain a rule is expressed in a short stanza all in the same place in the file. To eliminate any impact from a DEFAULT stanza just add an empty one above the rule:
DEFAULT
clean-rule ...
We know for certain that the "clean-rule" is not modified by any taint from the DEFAULT in access.cf or above it in the current file.
There is no limit to the unexpected impact a "simple" change might have in sudoers. Using the escalation configuration to change the rules based on the host it is installed on is a poor excuse for configuration management -- if you want two machines to share all the same files, then you really want one bigger machine, so buy one. The larger machine is cheaper than the first security issue caused by the lack of control over your escalation policy.
It is far more secure to configure exactly the rules needed on each and every host: not the same superset on all hosts.
Then use op's -S option to sanity check each host for missing programs, directories, or nonsensical rules. You should be sanity checking your access.cf and/or your sudoers files. And you should be doing it on each host periodically.
Op version 2.x is as compatible with version 1.x as I can make it. I believe any existing version 1 configuration will do exactly the same thing under my version, if you rename the file to access.cf from whatever else it was named.
The path to op and the configuration directory are now under /usr/local because local convention requires it. There is no good reason you could not recompile the program to live under some other location; override RUN_LIB in the build process.
The older parser tried to use the universal escape (backslash, \) to quote dollar and comma, with limited success. Now we use a double-comma, and a double-dollar, to quote those characters. We don't make backslash special except after a dollar (e.g. $\t).
The use of an open curly ({) and close curly (}) to quote an in-line script is not identical to recent branches of op, but I believe it is clear, and avoids any use of backslash inside the script. (It is always safe to put a semicolon before a leading close curly in a shell script or perl program.)
In the following sections I point out how to convert from other escalation programs to op.
Converting super to op
Super filters the environment with some hard coded rules (for TERM, LINES, and some others). The DEFAULT stanza below should make some of that less of a problem:
DEFAULT # super compat mode
	environment=^LINES=[0-9]+$,^COLUMNS=[0-9]+$,TERM=[-/:+._a-zA-Z0-9]+$
	$IFS=$\s$\t$\n
	$USER=$t $LOGNAME=$t $HOME=$H
	$ORIG_USER=$l $ORIG_LOGNAME=$l $ORIG_HOME=$h $ORG_SHELL=${SHELL}
	$PATH=/bin:/usr/bin
	$SUPERCMD=$0
There is no way to emulate super's shebang (#!) magic with op. Just use "op script" and let the rule set the program path. This is more secure in the long run.
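For instance (the mnemonic and path are hypothetical), a script that super ran through its #! magic becomes an ordinary rule, and the Customer types "op cleanup" instead:

cleanup /usr/local/libexec/op/cleanup.sh ; uid=root groups=^operator$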
Converting sudo to op
The sudoers files I've helped convert tend to range from wildly insecure to limitless, allowing unlimited system access to nearly every user (usually inadvertently).
First stop using "sudo ..." as a prefix for "just run this as root" and start using other mortal (application) logins, and limited commands. Then see the configuration tips above.
To set an environment that looks like sudo's:
DEFAULT # look more like sudo
	$USER=$t $HOME=$H
	$SUDO_COMMAND=$_ $SUDO_USER=$l $SUDO_UID=$L $SUDO_GID=$R
	# PS1=${SUDO_PS1}
If you want more command-line compatibility you can look for the sudop command that tries to make op look more like sudo.
Converting pfexec to op
If you use /etc/user_attr on your hosts you are in a hell all of your own making. The whole getuserattr(3) manual page stinks of YAGNI code (like the many "reserved for future use" fields in the structures).
If I want to keep a list of who can do what in a file, I'll use op's configuration and skip all the extra cheese in a generic feature that's looking for a problem to solve. With pfexec it is way too easy to give away more access than you thought you were giving. And you always have to manage roles by login name, which is the hardest way to manage escalation rules.
Build op rules for the roles people really need and skip the generic functions that give away the show.
Converting su2 to op
www MAGIC_SHELL ; uid=www initgroups=www groups=your-list-here
I agree that it is useful for a user to shift to a shared login for some tasks, but I'd rather not make it easy for Customers to Trojan each other without a setuid-bit on the created file.
$A
$C
    The dirname of $c (the directory holding the access.cf file).
$D
    file.
$E
    If op has a setuid bit on it, this is the uid that owns the file.
$F
    file. This is represented as a small integer for shell file duplication, as in 1>&%F.
$G
    The group specified on the command-line.
$H
$I
    initgroups(3) call from the rule.
$L
$N
    setgroups(2).
$O
$P
$R
$S
    /bin/sh, which may be overridden by an environment specification of $SHELL.
$T
$U
    The login provided to -u. When this is requested the command specification must include that option.
$W
$a
$c
    The configuration file (see -V), usually "/usr/local/lib/op/access.cf".
$d
    file specified under -f.
$e
    op is setuid to, usually "root".
$f
    The file specified on the command line.
$g
    The group on the command-line.
$h
$i
    initgroups(3) call from the rule.
$l
$n
    setgroups(2).
$o
$p
$r
$s
    The script body specified for the current rule (when the first parameter is a curly-brace form).
$t
$u
    The login provided to -u, by name. When this is requested the command specification must include that option. If the specification is a uid (that is, a number) it must resolve to a valid login.
$w
$~
    The home directory op was executed under (usually root's home directory). Op may be installed setuid to another user (usually by a different name): in this case it acts as a less powerful application service, but still retains much of its effectiveness.
${ENV}
    The value of ENV as it was presented in the original environment.
$number
    The numbered command-line parameter; $0 is the mnemonic name selected.
$#
$$
$.
$*
$@
$!
$&
    $! is the same as $@, but skips the first word in the parameter list. Also $& is the same as $*, but skips the first word in the parameter list as well.
$\escape
    tr(1) backslash notation to specify a special character. The letter 's' is also allowed for a space character.
$|
$_
    (MAGIC_SHELL). This may not be used to define itself, of course. This is handy to allow an environment variable to pass the target program path on to a helmet or jacket.
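As a small illustration (this rule is hypothetical), several expansions working together: the rule passes the Customer's first argument to the program, and hands the calling login and the mapped program path to a helmet through the environment:

report /usr/local/sbin/run-report $1 ; $1=^[a-z]+$ uid=root $CALLER=$l $TARGET=$_ helmet=/usr/local/libexec/op/check-window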
$Id: refs.html,v 2.68 2010/07/28 22:46:39 ksb Exp $