Administrative Updates to the VRE
Overview
N.B. This document is derived from a Wiki page and was written with a specific installation of the VRE software in mind. It should be re-written to take into consideration the needs of someone installing the system at their own institution. It also assumes a working knowledge of Memfem.
Instances of the text <web server> should be replaced with the actual hostname of the web server used to host the VRE. Instances of the text <local cluster> should be replaced with the actual hostname of the PBS-compliant local cluster used for job submission.
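The substitution described above can be scripted; the sketch below uses sed on a throw-away sample file, with vre-web.example.org and cluster.example.org as invented example hostnames (GNU sed assumed for in-place editing):

```shell
# Create a sample file containing the two placeholders, then substitute
# example hostnames for them. The hostnames here are illustrative only.
printf 'ssh <web server>\nqsub on <local cluster>\n' > /tmp/vre_sample.txt
sed -i -e 's/<web server>/vre-web.example.org/g' \
       -e 's/<local cluster>/cluster.example.org/g' /tmp/vre_sample.txt
cat /tmp/vre_sample.txt
```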
Dependencies
Sun Java 5 (on web server and cluster)
Postgresql 8.1 (on web server)
Apache Tomcat 5.5 (on web server)
Apache ant (on web server)
Python (on web server and cluster)
VTK (on web server and cluster)
Xvfb (Virtual Frame Buffer) (on web server and cluster)
An account for ibuser with home directory /home/ibuser (on web server and cluster)
PostgreSQL bin directory added to PATH (on web server)
Apache Ant bin directory added to PATH (on web server)
JAVA_HOME environment variable set (on web server)
CATALINA_HOME environment variable set (on web server)
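The PATH and environment-variable requirements above might be satisfied with .bashrc entries along these lines (the install locations shown are assumptions; adjust them to wherever Java, Tomcat, PostgreSQL and Ant actually live on your web server):

```shell
# Example ~/.bashrc fragment for the web server; paths are illustrative.
export JAVA_HOME=/opt/sun-jdk-1.5.0
export CATALINA_HOME=/usr/share/tomcat-5.5
export PATH=$PATH:/usr/local/pgsql/bin:/opt/apache-ant/bin:$JAVA_HOME/bin
```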
Installing Tomcat and PostgreSQL
Install tomcat-5.5
To do this, we needed to follow the instructions at http://www.gentoo.org/proj/en/java/java-upgrade.xml to upgrade Java to 1.5. If installing the VRE on Fedora Core, you may want to follow http://www.jpackage.org/installation.php instead.
Then install tomcat-5.5. This requires several Sun packages that must be fetched manually; they can be found in /usr/portage/distfiles. From the tomcat/src directory, run ant. You may have problems obtaining a required Eclipse .zip file, which can be circumvented by editing build.properties.default to provide an alternative address to fetch the file from. Replace line 141 in build.properties.default with: jdt.loc=http://archive.eclipse.org/eclipse/downloads/drops/R-3.1.2-200601181600/eclipse-JDT-3.1.2.zip
Install postgres-8.1.*. For Fedora Core 6, the process used was as follows:
./configure
gmake
su
gmake install
adduser postgres
mkdir /usr/local/pgsql/data
chown postgres /usr/local/pgsql/data
su - postgres
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
/usr/local/pgsql/bin/postmaster -D /usr/local/pgsql/data >logfile 2>&1 &
/usr/local/pgsql/bin/createdb test
/usr/local/pgsql/bin/psql test
cp tomcat.policy /etc/tomcat-5.5/
cp ~rblake/postgresql.jar /usr/share/tomcat-5.5/common/lib/
Also add the file /usr/share/tomcat-5.5/common/lib/naming-factory-dbcp.jar
Installing the VRE Application
The VRE application consists of a PostgreSQL database, a J2EE web application, and a standalone Java executor application. The web application and database are hosted on <web server>; the standalone Java executor application is installed on each cluster the VRE can talk to, currently <web server> and <local cluster>.
Checking out the Source Code
The following assumes you want to place the source code in ~/src. Do the following if checking out for the first time:
mkdir ~/src
cd ~/src
svn co https://ibvre.svn.sourceforge.net/svnroot/ibvre ibvre
Otherwise just update SVN with:
svn update ~/src/ibvre
Update src/uk/ac/integrativebiology/vre/config/vreconfig.xml and sql/populateReferenceData.sql to reflect local environment.
CAUTION this will wipe all data held in the VRE database on <web server>.
ssh <web server>
cd ~/src/ibvre/sql
psql -h localhost -p 5432 -U postgres
create database vre;
\c vre
\i createVREDatabase.sql
\i populateReferenceData.sql
\q
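The same rebuild can also be scripted non-interactively; this sketch assumes the two .sql files are run from the sql directory and that the postgres role can connect without a password:

```shell
# Recreate the VRE database from the SQL scripts (destroys existing data).
cd ~/src/ibvre/sql
createdb -h localhost -p 5432 -U postgres vre
psql -h localhost -p 5432 -U postgres -d vre -f createVREDatabase.sql
psql -h localhost -p 5432 -U postgres -d vre -f populateReferenceData.sql
```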
Viewing the Database
To browse the VRE database use pgAdmin3. To connect to the database, add the server by clicking on the plug button on the toolbar and enter the following details into the server dialog box:
Address: <web server>
Port: 5432
Leave everything else unchanged.
If the firewall prevents outside machines accessing port 5432, and you want to run pgAdmin3 locally, it is possible to get round this through SSH tunnelling. First enter the following command on your local machine:
ssh -L 5433:localhost:5432 ibuser@<web server>
Then just run pgAdmin3 locally connecting to localhost port 5433.
As an example of what can be done with this tool, click on the 'vre' database under 'databases', click the SQL button on the toolbar, enter the following command and click the play button:
select * from fil_file f
join jof_job_output_file j on j.jof_fil_id = f.fil_id
join job_job j2 on j2.job_id = j.jof_job_id
join clu_cluster c on c.clu_id = j2.job_clu_id
join exp_experiment e on e.exp_id = j2.job_exp_id
where fil_name like 'restart%';
This will display all of the restart files held in the database against jobs and experiments.
Installing/Updating the Web Application
Before building the VRE, a manual step is required to copy the .jar files to the webapps directory. If the destination directory does not exist, create it first:
cd ~/src/ibvre/lib
find . -name '*.jar' -exec cp {} ~/src/ibvre/webapps/vre/WEB-INF/lib/ \;
Build the VRE application by changing to the source root directory, and running Apache ant as follows:
cd ~/src/ibvre
ant build-all
Now update the web application. This assumes you have built the application on <web server>; if you built it elsewhere you just need to scp it across:
cp ~/src/ibvre/build/dist/VRE.war /var/lib/tomcat-5.5/webapps
It will take a few seconds for Tomcat to refresh itself. You can tell when this has happened by examining the modification date on the /var/lib/tomcat-5.5/webapps/VRE directory. It should be the current datetime +/- a few seconds.
Now deploy the Job Update web service:
ant deploy-jobupdateservice
Verify the JobUpdateServices has been correctly deployed by visiting http://<web server>:8080/VRE/services and looking for the JobUpdateService in the list.
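Assuming curl is available, the same check can be made from the command line:

```shell
# Expect the services page to mention JobUpdateService if deployment worked.
curl -s "http://<web server>:8080/VRE/services" | grep -i JobUpdateService
```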
Installing the Memfem Executor
If you are building the Memfem executor for a real Torque queuing system, you need to ensure spoofqsub=false in ~/src/ibvre/src/uk/ac/integrativebiology/vre/executor/memfem/remote/executor.properties:
# Are we spoofing qsub and qstat with sh and cat? Also determines whether -f is added to the qstat command.
spoofqsub=false
Decide on the number of nodes you want the Remote Executor to use and update the numberofnodes property in executor.properties. For example,
numberofnodes=2
If you are installing the remote executor for the first time on the cluster follow these steps:
cd ~/src/ibvre
ant build-remote-memfem
cd ~/src/ibvre/build/dist
unzip RemoteMemfemInstall.zip -d RemoteMemfemInstall
cp RemoteMemfemInstall/* /home/ibuser/clusterhome/EXECUTOR/
Otherwise, just copy the JAR file across with the following steps:
cd ~/src/ibvre
ant build-remote-memfem
cd ~/src/ibvre/build/dist
cp RemoteMemfem.jar /home/ibuser/clusterhome/EXECUTOR/
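If the executor is destined for <local cluster> rather than the web server, the JAR can be copied over ssh instead; a sketch:

```shell
cd ~/src/ibvre/build/dist
scp RemoteMemfem.jar ibuser@<local cluster>:/home/ibuser/clusterhome/EXECUTOR/
```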
Adding an Ionic Model
Make a note of the Memfem option value for this ionic model. For example, 'aLRd'. In the following, this will be denoted <luv_value>.
Choose an all-uppercase short name for the model prefixed with IM_. For example, 'IM_ALRD'. In the following, this will be denoted <luv_fixed_name>. This is the name the VRE application will use internally to refer to this ionic model.
Choose a longer description for the model. For example, 'Adapted Luo-Rudy Dynamic'. This is a human-readable name giving the user more information about this ionic model. In the following, this will be denoted <luv_description>.
Open ~/src/ibvre/src/uk/ac/integrativebiology/vre/executor/memfem/memfemlookups.xml in an XML/text editor and add an <ionicModel> XML element as the last item within <ionicModels> to reference this ionic model. This is used by the Memfem executor class within the VRE when jobs are submitted and input files created. Replace the values in <>s with their appropriate values:
<ionicModels>
...
<ionicModel fixedName="<luv_fixed_name>" paramValue="<luv_value>"/>
</ionicModels>
For example,
<ionicModels>
...
<ionicModel fixedName="IM_ALRD" paramValue="aLRd"/>
</ionicModels>
Log into the database by first sshing into <web server> and then logging into the PostgreSQL database:
ssh <web server>
psql -h localhost -p 5432 -d vre -U postgres
Calculate the next available primary key by running the following command through the psql command line:
select max(luv_id)+1 from LUV_LOOKUP_VALUE;
Make a note of this number. In the following, this will be denoted <luv_id>.
Enter the following command to add a new row to the LUV_LOOKUP_VALUE table, which contains all static lookup data used by the VRE application:
insert into LUV_LOOKUP_VALUE(luv_id, luv_lut_id, luv_fixed_name, luv_description, luv_value) values (<luv_id>,5,'<luv_fixed_name>','<luv_description>','<luv_value>');
For example,
insert into LUV_LOOKUP_VALUE(luv_id, luv_lut_id, luv_fixed_name, luv_description, luv_value) values (42,5,'IM_ALRD','Adapted Luo-Rudy Dynamic','aLRd');
Copy this insert statement into ~/src/ibvre/sql/populateReferenceData.sql in the appropriate place.
Rebuild the web applications by following the instructions above.
Test
Commit your changes to SVN with:
cd ~/src/ibvre
svn commit sql
svn commit src
Adding a New Geometry Model
Make a note of the Memfem option value passed to the -n flag. In the following, this will be denoted <model_param_value>. When run, Memfem will search for files of the form <model_param_value>.*.
Choose a unique directory name (without spaces) for the model. In the following, this will be denoted <model_dir_name>. For example, 'canine'.
Create a new directory <model_dir_name> under /home/ibuser/clusterhome/MEMFEM on each cluster.
Copy model files to /home/ibuser/clusterhome/MEMFEM/<model_dir_name> on each cluster. For example:
clusterhome/MEMFEM/canine/canine.tetras
clusterhome/MEMFEM/canine/canine.bcs
clusterhome/MEMFEM/canine/canine.cg_in
clusterhome/MEMFEM/canine/canine.cond
clusterhome/MEMFEM/canine/canine.fibers
clusterhome/MEMFEM/canine/canine.mem.cond
clusterhome/MEMFEM/canine/canine.pace.set
clusterhome/MEMFEM/canine/canine.pts
clusterhome/MEMFEM/canine/canine.spec
clusterhome/MEMFEM/canine/canine.right_plate.set
clusterhome/MEMFEM/canine/canine.optical_points.set
clusterhome/MEMFEM/canine/canine.left_plate.set
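Copying a whole model directory to a cluster can be done in one step with a recursive scp; for the canine example:

```shell
scp -r canine ibuser@<local cluster>:/home/ibuser/clusterhome/MEMFEM/
```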
Choose an all-uppercase short name for the model prefixed with GM_. For example, 'GM_CANINE'. In the following, this will be denoted <luv_fixed_name>. This is the name the VRE application will use internally to refer to this geometry model.
Choose a longer description for the model. For example, 'Canine Model A from UCSD'. This is a human-readable name giving the user more information about this geometry model. In the following, this will be denoted <luv_description>.
Open ~/src/ibvre/src/uk/ac/integrativebiology/vre/executor/memfem/memfemlookups.xml in an XML/text editor and add a <geometryModel> XML element as the last item within <geometryModels> to reference this geometry model. This is used by the Memfem executor class within the VRE when jobs are submitted and input files created. Replace the values in <>s with their appropriate values:
<geometryModels>
...
<geometryModel fixedName="<luv_fixed_name>" paramValue="<model_param_value>" commonDirName="<model_dir_name>"/>
</geometryModels>
For example,
<geometryModels>
...
<geometryModel fixedName="GM_CANINE" paramValue="canine" commonDirName="canine"/>
</geometryModels>
Log into the database by first sshing into <web server> and then logging into the PostgreSQL database:
ssh ibuser@<web server>
psql -h localhost -p 5432 -d vre -U postgres
Execute the following SQL statement and make a note of the number returned. This is the value of the next available primary key. In the following, this will be denoted <luv_id>.
select max(luv_id)+1 from LUV_LOOKUP_VALUE;
Enter the following command to add a new row to the LUV_LOOKUP_VALUE table, which contains all static lookup data used by the VRE application:
insert into LUV_LOOKUP_VALUE(luv_id, luv_lut_id, luv_fixed_name, luv_description, luv_value) values (<luv_id>,4,'<luv_fixed_name>','<luv_description>','<model_dir_name>');
For example,
insert into LUV_LOOKUP_VALUE(luv_id, luv_lut_id, luv_fixed_name, luv_description, luv_value) values (99,4,'GM_CANINE','Canine Model A from UCSD','canine');
Copy this insert statement into ~/src/ibvre/sql/populateReferenceData.sql in the appropriate place.
Update web application by following the instructions above.
Test
Commit your changes to SVN with:
cd ~/src/ibvre
svn commit sql
svn commit src
Adding a New Cluster
The VRE is currently designed only to work with clusters supporting the PBS queuing system, and has only been tested for the Torque implementation http://www.clusterresources.com/wiki/doku.php?id=torque:torque_wiki. More than one cluster supporting this system can be used at the same time. If the need arises to introduce a cluster with a different queuing system, further work is required to enable the RemoteExecutor and ClusterProxy classes to be parameterised based on the Cluster_ID chosen for the job.
The VRE (as of Feb 2007) has built-in support for submission to <web server> and <local cluster>. Submission to <web server> is to a spoofed qsub/qstat system written by Umar Farooq. This requires that symlinks to sh and cat are present in /home/ibuser/clusterhome/EXECUTOR, as well as a couple of spoofing scripts. RemoteExecutor has to be built with spoofqsub=true in executor.properties.
Create a login for ibuser on the cluster.
Install Sun Java 1.5 on the cluster head node (needed for remote executor).
Install Python VTK on cluster head node (needed for visualisation).
Install Xvfb (virtual frame buffer) on the cluster and start it as ibuser:
Xvfb :2 &
Add the following to /home/ibuser/.bashrc:
export DISPLAY=<local cluster>:2.0
Set up ssh keys to allow the tomcat user on <web server> to log in to <local cluster> without a password, following the procedure on SshKeys. If /home/ibuser/.ssh/authorized_keys already exists, append /home/tomcat/.ssh/id_rsa.pub from <web server> to /home/ibuser/.ssh/authorized_keys on the cluster:
cat id_rsa.pub >> authorized_keys
Manually log into the cluster from tomcat@<web server> to add the cluster host key to /home/tomcat/.ssh/known_hosts on <web server>.
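A sketch of the key setup (this assumes the tomcat user has no existing key pair; see the SshKeys procedure for the authoritative steps):

```shell
# On <web server>, as the tomcat user: generate a passwordless key pair.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# Copy the public key to the cluster and append it there as ibuser.
scp ~/.ssh/id_rsa.pub ibuser@<local cluster>:/tmp/tomcat_key.pub
ssh ibuser@<local cluster> \
  'mkdir -p ~/.ssh && cat /tmp/tomcat_key.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
```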
Create the clusterhome directory tree:
/home/ibuser/clusterhome
/home/ibuser/clusterhome/EXECUTOR
/home/ibuser/clusterhome/MEMFEM
/home/ibuser/clusterhome/MEMFEM/EXE
/home/ibuser/clusterhome/MEMFEM/EXE/4.2
/home/ibuser/clusterhome/MEMFEM/rabbit
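The whole tree can be created with a single mkdir -p. The sketch below uses a temporary location so it can be run anywhere; on the cluster set CLUSTERHOME=/home/ibuser/clusterhome:

```shell
# Create the clusterhome directory tree in one command.
CLUSTERHOME=/tmp/clusterhome_demo   # use /home/ibuser/clusterhome on the cluster
mkdir -p "$CLUSTERHOME"/EXECUTOR \
         "$CLUSTERHOME"/MEMFEM/EXE/4.2 \
         "$CLUSTERHOME"/MEMFEM/rabbit
find "$CLUSTERHOME" -type d | sort
```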
Build the remote executor and unzip to /home/ibuser/clusterhome/EXECUTOR/ (see instructions on building the remote executor above).
Copy ~/src/ibvre/serverscripts/seperate_vis/snapshot.py to /home/ibuser/clusterhome/EXECUTOR/ on the cluster.
Copy over each geometry model directory to /home/ibuser/clusterhome/MEMFEM/<model> on the cluster. Note the point50.* files have been renamed rabbit.* for the benefit of the VRE application.
If installing onto a real PBS system,
Install the ~/src/ibvre/serverscripts/executememfem.sh script to /home/ibuser/clusterhome/EXECUTOR
Otherwise,
Copy ~/src/ibvre/serverscripts/executememfem<WEB SERVER>.sh to /home/ibuser/clusterhome/EXECUTOR/executememfem.sh on the cluster, and ~/src/ibvre/serverscripts/executememfemSPOOF.sh to /home/ibuser/clusterhome/EXECUTOR/executememfemSPOOF.sh
Create symlinks to spoof the real qsub/qstat:
ln -s /bin/sh /home/ibuser/clusterhome/EXECUTOR/qsub
ln -s /usr/bin/cat /home/ibuser/clusterhome/EXECUTOR/qstat
Install each Memfem binary to /home/ibuser/clusterhome/MEMFEM/EXE/<version>/ . For example,
/home/ibuser/clusterhome/MEMFEM/EXE/4.2/memfemexe
Update /home/ibuser/.bashrc on the cluster to add paths to Java and Python:
PYTHONPATH=:/usr/local/lib:/home/ibuser/VTK/Wrapping/Python:/usr/local/lib/python2.3/site-packages/vtk/
LD_LIBRARY_PATH=/usr/local/lib/
PATH=$PATH:/usr/java/jre1.5.0_11/bin
export PATH PYTHONPATH LD_LIBRARY_PATH
Add the cluster to the CLU_CLUSTER table in the VRE database on <web server>. For example,
insert into CLU_CLUSTER (clu_id, clu_fixed_name, clu_display_name,clu_username,clu_host,clu_home_dir) VALUES(3, 'LOCALCLUSTER','LocalCluster','ibuser','<local cluster>','/home/ibuser/clusterhome');
Rebuild and install the web application.
Adding a New Parameter to the VRE
TODO
Add rows to LUV_LOOKUP_VALUE.
Update Memfem executor?
Update web application?
Manually Deleting Jobs
Delete the rows in jof_job_output_file before the job row itself, since they reference job_job:
delete from jof_job_output_file where jof_job_id=<job_id>;
delete from job_job where job_id=<job_id>;
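Both deletes can be wrapped in a single transaction so that a failure part-way leaves the database unchanged. A sketch, using 42 as an example job id:

```shell
# Delete a job and its output-file rows atomically (42 is an example id).
psql -h localhost -p 5432 -d vre -U postgres <<'SQL'
begin;
delete from jof_job_output_file where jof_job_id = 42;
delete from job_job where job_id = 42;
commit;
SQL
```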
Basic HTTP Authentication for the VRE webapp
This will give minimal protection to the VRE webapp.
Add the following to the top of web.xml:
<!--
Define a "Security Constraint" on the VRE webapp. This will exclude
the web services.
-->
<security-constraint>
<web-resource-collection>
<web-resource-name>vre</web-resource-name>
<url-pattern>*.do</url-pattern>
</web-resource-collection>
<auth-constraint>
<role-name>vre</role-name>
</auth-constraint>
</security-constraint>
<!-- Define the Login Configuration for this Application -->
<login-config>
<auth-method>BASIC</auth-method>
<realm-name>vre</realm-name>
</login-config>
Add a vre user to /var/lib/tomcat-5.5/conf/tomcat-users.xml:
<role rolename="vre"/>
<user username="vre" password="memfem" roles="vre"/>
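After restarting Tomcat, the constraint can be checked from the command line: unauthenticated requests to a *.do URL should return 401, authenticated ones should not (Welcome.do is a hypothetical action name; substitute a real one from your deployment):

```shell
# Without credentials: expect HTTP 401.
curl -s -o /dev/null -w '%{http_code}\n' "http://<web server>:8080/VRE/Welcome.do"
# With credentials: expect a non-401 status.
curl -s -o /dev/null -w '%{http_code}\n' -u vre:memfem "http://<web server>:8080/VRE/Welcome.do"
```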