I just wrote a series of scripts that run batch GALFIT fits. Since it's tedious to parallelise this manually (e.g. splitting the list of input IDs into smaller lists and running the same script separately for each sublist, or passing chunks of IDs on the command line), I wrote a simple parallel routine using Python's multiprocessing module (process-based, so not really multithreading). It's quite trivial, as the parallel processes here don't have to communicate (unless some terrible race condition turns up). The ID lists are made by hand for now, but it's easy to split a list into chunks of a desired length by looping through it, or with a list comprehension.
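A minimal sketch of the chunk-and-map pattern described above, using a list comprehension to split the IDs and a `multiprocessing.Pool` to farm the sublists out to worker processes. The `fit_chunk` function is a hypothetical stand-in for the actual per-chunk GALFIT driver, which isn't shown in the post:

```python
from multiprocessing import Pool


def chunk(ids, size):
    """Split a list of IDs into sublists of at most `size` elements."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]


def fit_chunk(id_chunk):
    """Hypothetical per-chunk driver: in the real scripts this would
    invoke GALFIT for each object ID; here it just echoes the IDs."""
    return [f"fitted {obj_id}" for obj_id in id_chunk]


if __name__ == "__main__":
    ids = [f"obj{n:03d}" for n in range(10)]
    # Each worker process handles one sublist; no inter-process
    # communication is needed, matching the embarrassingly parallel setup.
    with Pool(processes=4) as pool:
        results = pool.map(fit_chunk, chunk(ids, 3))
    for sublist in results:
        print(sublist)
```

Since the fits are independent, `pool.map` is all that's needed; results come back in the same order as the input chunks.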