NXRefine¶
The code was written by Dr. Ray Osborn; more details are available in the NXRefine documentation. It provides tools for stacking the raw data, finding Bragg peaks, solving the orientation matrix, and finally producing the HKL intensities.
Data Policy¶
Raw data are stored for six months and processed data for one year. Please process your data and copy it to your own server; if you need more time, consult your beamline scientist.
Steps to use the NXRefine GUI on the CHESS data analysis computer¶
- Step 0: Create the nxrefine folder with the correct calibration file. Talk to your beamline scientist if you have any questions.
- Step 1: Link the raw data to the processed data folder. Import and link the data from the id4b folder to the id4baux folder.
- Step 2: Check the data and manage the workflow. Load the data from the GUI.
- Step 3: Find peaks and maximum peak values. Do NOT submit the link, max, and find jobs together; run them one at a time, otherwise the program will break.
- Step 4: Find the orientation matrix. If you are unable to do this, talk to your beamline scientist.
- Step 5: Check the HKLI data.
- Step 6: Check the data: HKLI volume.
- Step 7: Select the parent file when processing datasets at different temperatures. Check the data, make sure it looks good, then select that file as the parent file.
- Step 8: Check the viewing server log.
Running jobs from a personal laptop and submitting them to the cluster¶
Steps for data analysis with nxrefine during beamtime:
Step 1: Open your terminal:
ssh <chess_id>@lnx201.classe.cornell.edu
Step 2: In the beamline data analysis GUI (nxrefine), import the data. A video walkthrough: https://www.youtube.com/watch?v=IAX8-wOgImc
Modify the scripts¶
- Open terminal 1:
  cd /nfs/chess/id4baux/<cycle_number>/<proposal_id>/scripts
- Copy and rename the script file:
  cp qsub_suchi_FeNiCo_300.sh qsub_<username>_<sample_name>_<temp>.sh
- Open terminal 2 and go to the folder where you loaded the data from id4b to id4baux:
  cd /nfs/chess/id4baux/2026-1/sarker-0000-a/nxrefine/<sample_name>/<sample_id>
- Go back to terminal 1 and change the directory and temperature in the script:
  Step 1: nano qsub_<username>_<sample_name>_<temp>.sh
  Step 2: USER_DIR='/nfs/chess/id4baux/2026-1/sarker-0000-a/nxrefine/FeNiCo/S1/' # Ctrl+E goes to the end of the line; copy the sample directory from terminal 2
  Step 3: See Option 1 (no parent file) or Option 2 (parent file) below for which commands to run
  Step 4: for TEMP in 20; # provide the correct temperature
  Step 5: Ctrl+O # save the file in nano
  Step 6: Press return
  Step 7: Ctrl+X # exit nano
  Step 8: cat qsub_<username>_<sample_name>_<temp>.sh # check the file again
- Change the permissions again on the beamline data analysis computer (id4baux permission terminal) and on your own computer:
  chmod -R 777 /nfs/chess/id4baux/<cycle_number>/<proposal_id>
- Submit the job from your own terminal:
  qsub -q all.q -l mem_free=200G -pe sge_pe 16 /nfs/chess/id4baux/2025-1/<proposal_id>/qsub_116.sh
- Check the job status:
  qstat
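Put together, the edited submission script typically has the shape sketched below. This is an illustrative sketch, not the exact beamline script: the path, sample name, and temperature are the placeholder examples from the steps above, and the nxreduce/nxfind lines are the ones listed under Options 1 and 2.

```shell
#!/bin/bash
# Sketch of qsub_<username>_<sample_name>_<temp>.sh (illustrative; adapt to your proposal).
USER_DIR='/nfs/chess/id4baux/2026-1/sarker-0000-a/nxrefine/FeNiCo/S1/'  # sample directory, trailing slash included

for TEMP in 20; do  # list the temperature(s) to process
    # Keep or comment out the lines below according to Option 1 (no parent
    # file) or Option 2 (parent file available) in the following sections.
    nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --load --overwrite
    nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --link --max
    nxfind --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 -t 10000 --overwrite
done
```

The `for TEMP in ...` loop is why one script can process several temperatures: each pass operates on a different subdirectory of `USER_DIR`.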
Option 1 : Parent file NOT available¶
- If the parent file is not available when you submit the job, run only the commands below and comment out the others:
  nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --load --overwrite
  nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --link --max
  nxfind --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 -t 10000 --overwrite
- Comment out the other commands while the parent file is not available:
  # nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --link --copy --max
  # nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --copy
  # nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --refine
  # nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --transform --combine --regular
  # nxreduce --directory $USER_DIR$TEMP --pdf --regular
- After the parent file is ready, make sure all of the previous jobs (load, link, max, find) have completed for that temperature, then modify the same file to run the commands below and comment out the commands you already ran:
  nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --copy
  nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --refine
  nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --transform --combine --regular
- Comment out the commands you already ran:
  # nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --load --overwrite
  # nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --link --max
  # nxfind --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 -t 10000 --overwrite
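Note how `${USER_DIR}${TEMP}` expands in these commands: `USER_DIR` ends with a slash, so appending `TEMP` yields the per-temperature data directory. A minimal, runnable illustration using the example values from this page:

```shell
# How the --directory argument is composed in the qsub script.
USER_DIR='/nfs/chess/id4baux/2026-1/sarker-0000-a/nxrefine/FeNiCo/S1/'  # trailing slash matters
TEMP=20
DIR="${USER_DIR}${TEMP}"
echo "$DIR"  # /nfs/chess/id4baux/2026-1/sarker-0000-a/nxrefine/FeNiCo/S1/20
```

If the trailing slash is missing from `USER_DIR`, the concatenation would name a non-existent directory (e.g. `.../S120`), which is a common reason these jobs fail silently.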
Option 2: Parent file available¶
If the parent file is available when you submit the job, run only the commands below and comment out the others:
nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --load --overwrite
nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --link --copy --max
nxfind --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 -t 10000 --overwrite
nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --refine
nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --transform --combine --regular
- Comment out the other commands (those are for Option 1 only):
  # nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --link --max
  # nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --copy
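If you prefer not to toggle lines by hand in nano, comment markers can also be added with sed. This is a hypothetical convenience, not part of the official workflow; the temporary file stands in for your real qsub script, and the pattern shown is illustrative:

```shell
# Hypothetical helper: comment out the Option 1-only '--link --max' line
# before an Option 2 run, instead of editing the script in nano.
SCRIPT=$(mktemp)  # stand-in for qsub_<username>_<sample_name>_<temp>.sh
cat > "$SCRIPT" <<'EOF'
nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --link --max
nxreduce --directory ${USER_DIR}${TEMP} --entries f1 f2 f3 --link --copy --max
EOF
# Prefix the plain '--link --max' line with '# '; the '--link --copy --max'
# line does not match the end-of-line anchor and is left untouched.
sed -i 's/^nxreduce .*--link --max$/# &/' "$SCRIPT"
cat "$SCRIPT"
```

Always `cat` the script afterwards (as in Step 8 above) to confirm that only the intended lines were commented out.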