# $Id: TODO,v 3.28 2013/07/24 17:59:11 ksb Exp $

http://www.gnu.org/s/parallel/man.html
	Submit a "bug report" updating the xapply comparison.  Parallel's
	-P+N and -P-N are kind of nifty, but I don't think it is worth
	adding to xapply.  Tell him "remote" is spelled as "remove" in
	his manual page, too.

Implement other signals for %p: viz. TERM to kill the running tasks,
drop the pending tasks, then wait for them.  Maybe INFO, if our host
supports it.

If -P is specified and $PARALLEL is NOT set (or empty) we should fetch
sysconf(_SC_NPROCESSORS_ONLN) and use that, then set $PARALLEL to 1,
perhaps.  Then recursive xapplys would not each exponentially do the
same thing (which fork-bombs our host).

Possible steps to make xapply more robust:

Currently we only ask ptbw for tokens once.  We should come back to ask
for more when we didn't get the max and we are out of tokens to run
tasks (with more waiting).  I just don't have the energy to do it (yet).
Also I think we could free more resources earlier, maybe.  This might
imply an option to _always_ poll ptbw for each command, which slows the
start of commands (a tiny bit) and beats ptbw pretty hard.

Allow %g to access a gtfw instance to build a scoped temporary file.
This is why hxmd (msrc) takes a -g option.  And %G should build a new
one for every iteration, maybe.

Allow the dicer to access the environment: %{{TERM}} or %[{PATH}:1]
would be useful.  It might be confusing to allow this because it
overloads the curly to mean 2 things: %{TERM} means %te, which is
wrong and error-prone.  We just let the shell do it, but the shell
doesn't have the dicer/mixer.  The work-around is to use an internal
oue(1l) to dice/mix for you.

Nits:

We allocate _way_ too much memory.  Sanity check -P's (given or
implied) parameter for overflow (against the sysconf limit, or that
times 8)?

	$ xapply -P100000 -f ...

At compile time the first one of these that succeeds is the LIMIT
(-D it):

	bash -c "ulimit -u"
	sysctl kern.maxprocperuid
	sysctl kern.maxproc (divided by 3?)
	CHILD_MAX | MAXUPRC from <limits.h>
	or echo 65536

So we should make -V report the default max process limit we picked,
and maybe the one we would use.

Code bugs:

We just still might smack the stack in some cases, but I've not been
able to make it happen in a "real world" case.  If we do, then we were
trying to exceed ARG_MAX (aka NCARGS) and we couldn't have executed
the task anyway.

Helpers to code:

The options to binpack, oue, and glob to support xapply work well.
Other helpers like those might be a very good idea.

A program to take a CSV from Excel (or the like) and turn it into -z
(\000, NUL) separated fields.  This should also pad short lines out to
an optional fixed number of fields.  I don't know all the rules for
CSV export files, but I'm sure someone out there does.  If we have
enough information we could even process .xls or .xlsx files directly.
I once had a perl program that extracted from .xls, but not .xlsx
files.

-- ksb, Mar 2013