1. Purpose
This document is intended for administrators of the BioMS application. It describes the process of deploying BioMS and the regular administration activities to be performed on the application.
...
# Define list of workers that will be used
# for mapping requests
worker.list=loadbalancer,status
# Define Bioms-node1
# Modify the host to this node's IP address or DNS name.
worker.bioms-node1.port=8009
worker.bioms-node1.host=<bioms-node1-host>
worker.bioms-node1.type=ajp13
worker.bioms-node1.lbfactor=1
worker.bioms-node1.cachesize=10
# Define Bioms-node2
# Modify the host to this node's IP address or DNS name.
worker.bioms-node2.port=8009
worker.bioms-node2.host=<bioms-node2-host>
worker.bioms-node2.type=ajp13
worker.bioms-node2.lbfactor=1
worker.bioms-node2.cachesize=10
# Load-balancing behaviour
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=bioms-node1,bioms-node2
worker.loadbalancer.sticky_session=1
# Status worker for managing load balancer
worker.status.type=status
Configure the worker URI map in the Apache load balancer. Add the following uriworkermap.properties file to the Apache configuration:
/bioms=loadbalancer
/bioms/*=loadbalancer
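For reference, here is a minimal sketch of the httpd directives that tie the two files above into mod_jk; the module path, file locations, and log settings below are assumptions and should be adjusted to your installation:

```
# Load the mod_jk connector module (path to mod_jk.so is installation-specific)
LoadModule jk_module modules/mod_jk.so

# Point mod_jk at the worker definitions and the URI map shown above
JkWorkersFile conf/workers.properties
JkMountFile   conf/uriworkermap.properties

# Connector logging (file and level are assumptions)
JkLogFile  logs/mod_jk.log
JkLogLevel info

# Optionally expose the status worker defined in workers.properties
JkMount /jkstatus status
```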
...
To configure the nightly Mayo auth data sync cron job, run the 'crontab -e' command, add the following to the end of the file, and save:
```
...
0 22 * * * $HOME/.bioms/mayo-auth-sync/sync-mayo-auth-data.sh > $HOME/.bioms/mayo-auth-sync/log/mayo-auth-sync.log.`date +\%y\%m\%d-\%H\%M\%S` 2>&1
```
This will run the Mayo auth data sync every night at 10 PM; the output of each run is logged to a file named $HOME/.bioms/mayo-auth-sync/log/mayo-auth-sync.log.YYMMDD-HHmmSS.
To test the sync job and trigger an immediate sync of the auth data, run:
```
$HOME/.bioms/mayo-auth-sync/sync-mayo-auth-data.sh > $HOME/.bioms/mayo-auth-sync/log/mayo-auth-sync.log.`date +\%y\%m\%d-\%H\%M\%S` 2>&1
```
Once the execution is complete, check the log file at $HOME/.bioms/mayo-auth-sync/log/mayo-auth-sync.log.nnnnnnnn and make sure there are no errors in the log.
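A quick way to scan the most recent log for problems (a sketch; it assumes the log naming above and that failures are reported with words like 'error' or 'exception'):

```
# Find the newest Mayo auth sync log and scan it for errors
latest_log=$(ls -t $HOME/.bioms/mayo-auth-sync/log/mayo-auth-sync.log.* | head -1)
echo "Checking $latest_log"
grep -iE 'error|exception' "$latest_log" || echo "No errors found"
```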
...
Here is the SQL for inserting the repository data into the DB:
```
Insert into BMS_REMOTE_REPOSITORY (ID,NAME,JMS_QUEUE_NAME,AUTHKEY,STATUS,LAST_HEART_BEAT_TIME,CONTACT) values ((select max(id)+1 from BMS_REMOTE_REPOSITORY),'<repo-name>','<repo-name>','<authkey>',1,SYSDATE,'<contact_emails>');
```
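To confirm the row was created as expected, a simple check (a sketch; it uses the same table and columns as the insert above):

```
-- Verify the new remote repository entry
select ID, NAME, JMS_QUEUE_NAME, STATUS, CONTACT
from BMS_REMOTE_REPOSITORY
where NAME = '<repo-name>';
```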
Securely share the <repo-name>, the <authkey>, and the BioMS application URL (the load balancer URL) with the repository caTissue admin. They will need to update bioms-adaptor.properties with these details.
Once the bioms-adaptor is set up properly at the repository caTissue and started, you should see a message like:
...
- Request the caTissue admin to create the Repository Site along with the coordinator user in caTissue, and note the identifier of the Repository Site.
- Get the details of the Repository Site and the coordinator user name from the repository caTissue admin.
- Create a Site of type Repository in BioMS via the 'caTissue2.0A on BioMS' caTissue instance and select the coordinator user created by the caTissue admin as the coordinator (this user would have been synced to BioMS automatically). Note the id of the site just created in caTissue2.0A on BioMS as <bms_repo_site_id>.
Link the repository site created above with the caTissue remote repository created in Step 13.1. For this, insert a new row into the BMS_REMOTE_REPOSITORY_SITE table as shown below.
| REMOTE_REPOSITORY_ID | SITE_ID |
|---|---|
| <id of the remote repo entry for the remote repo> | <bms_repo_site_id> |
Here is the SQL command for this insert:
```
Insert into BMS_REMOTE_REPOSITORY_SITE (REMOTE_REPOSITORY_ID,SITE_ID) values ((select id from BMS_REMOTE_REPOSITORY where name='<repo-name>'),<bms_repo_site_id>);
```
(To be performed on the caTissue repository) Map the new Site created in BioMS to the corresponding repository Site created in caTissue per step 1. For this, request the repository caTissue admin to insert the following data into the BMS_CATISSUE_ENTITY_MAP table in the repository caTissue database:
| ID | ENTITY_TYPE | BMS_ID | CATISSUE_ID |
|---|---|---|---|
| BMS_CATISSUE_ENTITY_MAP_SEQ.nextval | edu.wustl.catissuecore.domain.Site | <bms_repo_site_id> | <id_of_repo_site_in_catissue> |
Here is the SQL insert command for the same.
```
Insert into BMS_CATISSUE_ENTITY_MAP (ID,ENTITY_TYPE,BMS_ID,CATISSUE_ID) values (BMS_CATISSUE_ENTITY_MAP_SEQ.NEXTVAL,'edu.wustl.catissuecore.domain.Site',<bms_repo_site_id>,<catissue_repo_site_id>);
```
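To confirm the mapping, a quick check the repository caTissue admin can run against the repository caTissue database (a sketch, using the same table and columns as the insert above):

```
-- Verify the Site mapping in the repository caTissue database
select ID, ENTITY_TYPE, BMS_ID, CATISSUE_ID
from BMS_CATISSUE_ENTITY_MAP
where ENTITY_TYPE = 'edu.wustl.catissuecore.domain.Site'
and BMS_ID = <bms_repo_site_id>;
```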
We should now be able to create studies with the new repository as the ship-to site for specimens and sync the studies. When a study is synced, it should show up in caTissue.
13.3. Building and rolling out a new Study
...
The BioMS admin should review this list of errors at least once a day to make sure the sync functions are working properly.
13.5. CALGB Incremental Registration load
The CALGB incremental registration load is configured on the Alliance BioMS (dev) server.
The load script /usr/local/bms/.bioms/calgb-reg-sync/load-incr-calgb-regdumps.sh is configured to run at 15 minutes past each hour from 7 AM to 8 PM EST, Monday through Friday.
The script downloads the incremental registration dump files from ftp.mayo.edu and attempts to load them into the BioMS QA deployment.
The logs from each execution of the incremental load go into the folder alliancebms@/usr/local/bms/.bioms/calgb-reg-sync/log
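For reference, a crontab entry matching that schedule might look like the following; this is a sketch mirroring the Mayo auth sync entry above, and the log file name is an assumption:

```
# Run the CALGB incremental load at 15 minutes past each hour, 7 AM-8 PM EST, Mon-Fri (sketch)
15 7-20 * * 1-5 /usr/local/bms/.bioms/calgb-reg-sync/load-incr-calgb-regdumps.sh > /usr/local/bms/.bioms/calgb-reg-sync/log/calgb-reg-sync.log.`date +\%y\%m\%d-\%H\%M\%S` 2>&1
```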