Useful Ganga Resources

Instructions on Using Ganga at Birmingham

Running Ganga Locally (eprexA, etc.)

The latest version of Ganga (5.1.1) has been installed and can be run using the following command:


Job submission to both the local batch system (PBS) and the Grid (LCG) backends is available. Typical submission scripts are available at:


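For reference, a minimal Ganga job definition for the Grid backend looks something like the following. This is an illustrative sketch only (not one of the scripts above): the job options file and dataset name are placeholders, and the exact class names should be checked against the Ganga version you are running. Run it from inside Ganga (e.g. `ganga myscript.py`):

```
# Illustrative Ganga job definition for the LCG backend.
# The options file and dataset names are placeholders -- substitute your own.
j = Job()
j.application = Athena()
j.application.option_file = 'MyAnalysis_topOptions.py'   # your job options
j.application.prepare()
j.inputdata = DQ2Dataset()
j.inputdata.dataset = 'mc08.(...).AOD.(...)'             # your input dataset
j.outputdata = DQ2OutputDataset()                        # output goes to a new DQ2 dataset
j.splitter = DQ2JobSplitter()
j.splitter.numsubjobs = 20
j.backend = LCG()
j.submit()
```

For a local PBS job, the same structure applies with `j.backend = PBS()` and ordinary output files instead of a DQ2 output dataset.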
The locally submitted jobs will save their output files in your user area, which you can then merge. The LCG jobs will create a new DQ2 dataset that you can pull locally using 'dq2-get' in the normal way. Unfortunately, you can't yet merge those within Ganga (I'm working on it!), but you can use TChain and hadd outside Ganga in the normal way.
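The merge outside Ganga can be scripted. Here is a minimal sketch that wraps hadd, assuming your job outputs are ROOT files collected in one directory (the `output_*.root` glob pattern is a placeholder, and running the merge itself requires a ROOT setup that provides hadd):

```python
import glob
import subprocess

# Gather the per-job output files (pattern is a placeholder -- adjust to
# wherever your jobs wrote their output in your user area).
files = sorted(glob.glob("output_*.root"))

# Build the hadd command; -f overwrites any existing merged file.
cmd = ["hadd", "-f", "merged.root"] + files

# Only invoke hadd if there is something to merge (needs ROOT set up).
if files:
    subprocess.run(cmd, check=True)
else:
    print("No output_*.root files found; nothing to merge.")
```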

It is also possible to submit to Bluebear from eprexA using the Remote backend. However, it's worth mentioning that you have to give the parameters (inputdata, outputdata.location) as if running from Bluebear (not surprisingly). Also, at present it will take a while to submit lots of split jobs, as it fires up Ganga on Bluebear for each job. Again, I'm working on this, but unless I get lots of e-mails, it will stay fairly low on my priority list, I'm afraid! Anyway, I've provided an example script here:


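As a sketch of what such a script contains (illustrative only: the host name, username and directory below are placeholders, and the Remote backend attribute names should be checked against your Ganga version):

```
# Run inside Ganga on eprexA; submits via the Remote backend to Bluebear.
# Host name, username and directory are placeholders.
j = Job()
j.application = Athena()
j.application.option_file = 'MyAnalysis_topOptions.py'
j.backend = Remote()
j.backend.host = 'bluebear.bham.ac.uk'       # placeholder host
j.backend.username = 'myuser'                # your Bluebear username
j.backend.ganga_dir = '/path/to/gangadir'    # placeholder, as seen from Bluebear
# Remember: inputdata and outputdata.location must be given as if
# the job were being submitted from Bluebear itself.
j.submit()
```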
If there is a call to provide a similar example for Lxplus submission from your desktop, I'll write one!

Running Ganga on Bluebear

Though Bluebear is now running SL5, SL4 is available (and recommended for ATLAS users) by using the following:


and then you can set up Athena as before (though it's worth mentioning that the login nodes of Bluebear have been rather slow recently...). Ganga is then available from:


Note that there is an odd problem with setting up Athena using the 'setup' tag: it doesn't seem to set the Python paths correctly. This leads to Ganga not having the 'Readline' module, which stops you from accessing previous commands. I'd recommend removing the tag unless you have a particular reason to keep it!

As an example of submitting to the Bluebear system, there is a script at:


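As a sketch of what a local PBS submission looks like (illustrative only: the queue name is a placeholder, and the splitter class name may differ between Ganga versions):

```
# Run inside Ganga on Bluebear; submits to the local PBS batch system.
j = Job()
j.application = Athena()
j.application.option_file = 'MyAnalysis_topOptions.py'
j.splitter = AthenaSplitterJob()      # splitter class name may vary by version
j.splitter.numsubjobs = 5
j.backend = PBS()
j.backend.queue = 'bbshort'           # placeholder queue name
j.submit()
```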
Dataset Replication Policy

If a dataset is not in the destination cloud (e.g. it is only at CERN and you want it in the UK), the replication will go ahead automatically. If it is already in the cloud (e.g. RAL -> UK), the request will go to the distributed analysis shifters for approval, and unless the source site is broken, you probably won't get it. If the request is < 10 GB, however, it will also get automatic approval.

Requests can be made through the web interface:

though if you have not used it before you will need to register (Chris Curtis has done this, so it's definitely possible!). Most of the fields are obvious, but the destination field is the site to copy to, as given by its TiersOfAtlas name:

e.g. UKI-SOUTHGRID-BHAM-HEP_MCDISK (note the space token 'MCDISK' on the end - NOT just BHAM!)

If you copy files to Birmingham and run using Ganga, you can specify our site to run on and you should have a much better success rate (though, to be fair, things seem to have got better now!). These jobs will run on the grid nodes in Birmingham, so if things go wrong, we have better control over them.

I will need to do some tinkering to get jobs running on the PBS on Bluebear and accessing data from the Birmingham SE, but this should give you a fallback option if the grid is having a bad day!
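To pin Grid jobs to the Birmingham site, something along these lines should work (a sketch assuming the GangaAtlas requirements interface; check the class and attribute names against your Ganga version):

```
# Inside Ganga: restrict an LCG job to the Birmingham site.
j.backend = LCG()
j.backend.requirements = AtlasLCGRequirements()
j.backend.requirements.sites = ['UKI-SOUTHGRID-BHAM-HEP']   # site name, without the space token
```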


OxfordGangaTutorial EdinburghGangaTutorial

-- MarkSlater - 24 Jan 2009
