
    User Documentation

    Since the 0.3 release, Adage features two modes:

    • a single-deployment mode, where you use Adage as in previous releases: it deploys your application, then exits.
    • an advanced daemon mode: you launch the daemon once with adage -d, then you communicate with it using the standalone adage-client, which provides the same command-line options. If you're lazy, the client can even launch the Adage daemon for you automatically.

    In both cases, at the end of your deployment, the scripts ./ and ./ will be generated and symlinked in the directory from which you launched Adage.

    To launch your application, you need an application description file, a control parameters file setting various options for the deployment, and finally a resource description listing the hosts available for your deployment.

    The built-in help says: <xterm> $ adage -help
    usage: -a <appl> -c <ctrl> [ -r <resources> | -j oarjobid | -g oargridjobid | -o $OAR_NODEFILE ] -D <key> -F <outfile> [ -p | -n | -x ] -h

    Command-line options:
    -a, --appl <file> specific application description.

    If used multiple times, all the applications are merged into a single deployment.

    -c, --ctrl <file> control parameters (planner to use, placement/architecture constraints, ...).
    -r, --res <file> resources description.
    -R, --addres <file> append a resources description.
    -o, --oarnodefile <file> get resources from a file containing node ids, like $OAR_NODEFILE.
    -j, --jobid <job id> get resources from an OAR job id.
    -J, --addjobid <job id> append resources from an OAR job id.
    -g, --gridjobid <job id> get resources from an OARGRID job id.
    -G, --addgridjobid <job id> append resources from an OARGRID job id.

    For -R/-J/-G, if used multiple times, the available resources are merged into a single resources file. For -o/-j/-J/-g/-G, the selected resources are extracted from the file 'tests/all-g5k.res', which describes all Grid'5000 resources.

    For a deployment, you need at least one appl file, one ctrl file and one resource description.
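As a quick sanity check before launching, you can verify that the three inputs exist. The file names below are placeholders, and the final adage invocation simply mirrors the options from the help above:

```shell
# A deployment needs at least one appl, one ctrl and one resource file.
# File names here are placeholders -- substitute your own.
for f in my_appl.xml my_ctrl.xml my_resources.res; do
    [ -f "$f" ] || echo "missing: $f"
done
# Once all three are present:
#   adage -a my_appl.xml -c my_ctrl.xml -r my_resources.res -x
```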

    -D, --dump <key> dump the corresponding document on stdout, key={generic,res,plan,ctrl}.
    -F, --dumpfile <file> dump the document into file <file> instead of stdout.

    example: "-D generic -F generic.xml -D ctrl -D plan -F plan.xml" => ctrl will be dumped to stdout, the other documents to their respective files.

    -p, --makeplan stop after the planning step.
    -n, --dryrun make the plan, and only show what would be done if the application were really deployed.
    -x, --run make the plan, then really deploy and run the application.

    -p/-n/-x are mutually exclusive; if none is given, Adage stops after the specific_to_generic() conversion.

    If only -d is given, Adage switches itself to daemon mode; all input will then be received from the standalone adage-client.

    -h, --help print this help. </xterm>

    Daemon mode

    Example use-case:

    <xterm> # the client will automatically launch the adage daemon in a new xterm/screen window;
    # otherwise, run adage -d in another shell
    $ adage-client -r my_resources.res -c my_ctrl.xml

    # whoops, don't forget to set the application to deploy
    $ adage-client -a my_appl.test

    # deploy our appl
    $ adage-client -x

    # change something in the ctrl params and resources
    $ adage-client -r my_resources.res -c my_ctrl.xml

    # dry-run our appl using these new parameters
    $ adage-client -n </xterm>

    The notion of redeployment in ADAGE is complex and powerful enough that it deserves its own documentation page.

    Where is my application launched?

    Base directory

    Your application is first deployed on each node; each process then runs in its own private directory within a predefined tree structure. You can make the base directory of this tree the node's scratch directory (generally /tmp) by setting the destdir="scratch" attribute on the transfer method in the control parameters, or place it in a directory shared across the nodes over NFS with destdir="shared". The directory tree is structured as follows:

    example : /tmp/lbreuil/adage-19780-2007-12-17-14:29:16/m1/0/m1_p1/0/
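The example path can be read component by component. This is only a sketch: the interpretation of the last four levels as group / group instance / process / process instance is our assumption based on the example above, not something the documentation states explicitly.

```shell
# Decompose the documented example run directory.
# After the base dir come the user, the adage session (pid + date),
# then -- in our reading -- group, group instance, process, process instance.
run_dir="/tmp/lbreuil/adage-19780-2007-12-17-14:29:16/m1/0/m1_p1/0"
IFS=/ read -r _ base user session g gi p pi <<EOF
$run_dir
EOF
echo "base=/$base user=$user session=$session"
echo "group=$g instance=$gi process=$p instance=$pi"
```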

    Where are my binaries ?

    If you use the same binary multiple times with different parameters, you may be interested in the binlib_in_commondir="true" setting, which tells ADAGE to stage binaries and libraries only once per node, in a single directory ($base_dir/$uid/adage-$adagepid/bin and $base_dir/$uid/adage-$adagepid/lib respectively). You don't even have to worry about PATH and LD_LIBRARY_PATH, because they are adjusted automatically at run time.

    If you leave this option set to false, binaries and libraries are staged in the run directory of each process.
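In the control parameters, this might look like the fragment below. This is a sketch only: the element carrying the attribute is an assumption; the binlib_in_commondir attribute itself and its effect are what the text above documents.

```xml
<!-- Sketch: the enclosing element name may differ in your ADAGE version;
     binlib_in_commondir is the documented attribute. -->
<transfer method="scp" destdir="scratch" binlib_in_commondir="true"/>
```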

    How can I get resources?

    There are two options:

    Describe your available resources

    Simply write your own resource file (take a look at tests/nodes.res for an example), and use -r myfile on the command line. This is the simplest option.

    Interacting with OAR/OARGRID

    If you are on Grid'5000 and you start ADAGE on a cluster frontend (this feature relies on oarstat, generally installed only on frontend nodes), you can interact with OAR2/OARGRID.

    During compilation and installation of ADAGE, a resource file describing all available resources on the platform (currently Toulouse, Bordeaux, Sophia, Nancy, Lyon and Rennes) was generated with the command perl scripts/ and installed in $prefix/share/adage/xml/all-g5k.res. It is used as a base to generate your own resource file, via the following options:

    • either use -o oarnodefile, where oarnodefile is a file containing the hostnames of the nodes you have reserved (you can use $OAR_NODEFILE if you launch adage from the host you land on after oarsub -I). This works only if you passed -t allow_classic_ssh to oarsub.
    • or use -j oarjobid, where oarjobid is the job id of your OAR reservation.
    • finally, use -g oargridjobid, where oargridjobid is the job id of your OARGRID reservation.

    Note that, if you don't use the OAR2 -t allow_classic_ssh hack to enable direct connection to the nodes of your reservation, you'll have to use oarsh as the submission method in the control parameters instead of ssh, and oarcp as the transfer method instead of scp.
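Without the allow_classic_ssh hack, the control parameters would select these methods. The element and attribute names below are illustrative assumptions; the method values oarsh and oarcp come from the text above:

```xml
<!-- Illustrative sketch: element/attribute names are assumptions;
     oarsh (submission) and oarcp (transfer) are the documented values. -->
<submission method="oarsh"/>
<transfer method="oarcp" destdir="scratch"/>
```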

    By the way, using the uppercase versions of these options, you can mix all the resources into a pool, which results in a single internal resources file at execution time (you can view it with the -D res switch). If you use Adage in daemon mode, you can even remove a resource from the pool of available resources using the -delres/-deloarjobid/-delgridjobid/-deloarnodefile switches.

    Example in daemon mode: <xterm> # initialise the resource pool with a file
    $ adage-client -r my_resources.res

    # append some resources from oar and oargrid
    $ adage-client -R my_second_resources.res
    $ adage-client -J 215310
    $ adage-client -G 1237

    # check that all our resources are here (the single resources document will be generated)
    $ adage-client -D res

    # my oar reservation has ended, I remove it from the pool
    $ adage-client -deloarjobid 215310 </xterm>

    How can I describe placement constraints?

    This is done in the control parameters file, in the <placement_constraints> section. An example:

       <associate id="h2" id_type="exact" match_type="exact" match="node02"/>
       <associate id="h1" id_type="exact" match_type="pattern" match="cluster1"/>
       <unassociate id="h" id_type="pattern" match_type="pattern" match="cluster2"/>
       <unassociate id="h1" id_type="exact" match_type="exact" match="cluster12"/>
       <separate id="h3" without="h2"/>
       <collocate id="h4" with="h1"/>

    Which means :

    • run instances of process group h2 on node node02
    • run instances of process group h1 on nodes whose id starts with cluster1 (this one is useful if you want to use a particular set of nodes for a process group)
    • don't run instances of process groups whose id starts with h on nodes whose id starts with cluster2
    • don't run instances of process group h1 on the node cluster12
    • don't run instances of process group h3 on nodes where instances of process group h2 are already running
    • run instances of process group h4 only on nodes where instances of process group h1 are already running

    Every combination of options is possible: exact match on id, associate/unassociate, pattern match on node ids, ... As you can see, it's possible to express complex constraints, like "run h1 on nodes matching cluster1, but not on the node cluster12, and not on nodes where h3 is already running".
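That last complex constraint can be written by combining three of the elements shown in the example above (same elements and attributes as documented; only the grouping into one section is ours):

```xml
<placement_constraints>
  <!-- run h1 only on nodes whose id starts with cluster1 -->
  <associate id="h1" id_type="exact" match_type="pattern" match="cluster1"/>
  <!-- ...but never on the node named exactly cluster12 -->
  <unassociate id="h1" id_type="exact" match_type="exact" match="cluster12"/>
  <!-- ...and never on nodes where instances of h3 are already running -->
  <separate id="h1" without="h3"/>
</placement_constraints>
```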
