Running a bemovi analysis
Loading the package and getting help
To start the processing and analysis of videos by bemovi, we first need to load the package:
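Loading is done with the standard library() call:

```r
# load bemovi; its dependencies (data.table, CircStats, circular) are attached automatically
library(bemovi)
```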
Messages indicate that the package dependencies (data.table, CircStats and circular) were loaded correctly. When installing the package from source, make sure these dependencies are available in your local R installation.
The following commands show you which functions are part of the bemovi package. Additional information can be obtained by preceding a specific function by a question mark, which will explain what the function does and what the arguments are that need to be provided:
# package manual and overview of functions in the bemovi package
help(package = bemovi)
# get specific help about a bemovi function:
?locate_and_measure_particles
Preparing the basic directory structure
Bemovi was designed to work on collections of video files, each of which could be a video of a particular sample at a particular time. A collection of videos can be termed a bemovi “project”. Each bemovi project requires you to create a directory (i.e., folder) that will hold all the files and other directories concerning that project. In the R code below, this directory is named “bemovi demo data”.
Two directories need to be created within this project folder. First, a folder (named “0 - video description” below) containing a tab-delimited text file with a description of the videos. Importantly, this file needs a column called “file”, which lists the names of the videos without their file extension (e.g., Data0001 for the Data0001.avi file); associated information can follow in separate columns (e.g., date, time, treatment levels). The video description file should not contain a variable called “id”, as this conflicts with an automatically created ID variable, and we have observed problems when the file does not end with a line break. Second, a folder containing the videos to be analysed (named “1 - raw” below) has to be provided.
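As an illustration, a minimal video description file could look like this (tab-delimited; the “file” column is required, while the other column names and values are made-up examples):

```
file	date	treatment
Data0001	2014-06-01	control
Data0002	2014-06-01	heated
```

Note that the file should end with a line break, as mentioned above.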
# project directory (you create this one yourself)
to.data <- "/Users/Frank/Dropbox/Bemovi/"
# you also have to provide two folders:
# the first contains the video description
# the second holds the raw videos
video.description.folder <- "0 - video description/"
video.description.file <- "video.description.txt"
raw.video.folder <- "1 - raw/"
As you run the bemovi functions below, you will see that new folders and files are automatically created within the project folder. These hold the results of the different analysis steps and also the ImageJ macros to process the data. You can choose the names of these folders. Please avoid using the tilde symbol when specifying paths, as this causes problems when ImageJ is called.
# the following intermediate directories are automatically created
particle.data.folder <- "2 - particle data/"
trajectory.data.folder <- "3 - trajectory data/"
temp.overlay.folder <- "4a - temp overlays/"
overlay.folder <- "4 - overlays/"
merged.data.folder <- "5 - merged data/"
ijmacs.folder <- "ijmacs/"
To check that the video file names conform to the naming conventions of bemovi, run:
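Based on the description that follows, the call presumably takes the project directory and the raw-video folder defined earlier:

```r
# check that the video file names conform to bemovi's naming conventions
check_video_file_names(to.data, raw.video.folder)
```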
The check_video_file_names() function takes two arguments, which together specify the path to the raw videos. If the message “Video file types ok.” appears, you can proceed to the next step; otherwise the function returns the offending filename(s), which you should correct (e.g., filenames should not contain periods except for the one separating the file name and its extension).
Before beginning the analyses of your videos, you must tell R where the ImageJ and the ParticleLinker applications are. Importantly, under Windows, you also need to provide the path to the Java executable (see example below). For example:
# UNIX example (adapt paths to your installation and avoid using the tilde symbol when specifying the paths)
IJ.path <- "/Applications/ImageJ/ImageJ64.app/Contents/Resources/Java/"
to.particlelinker <- "/Users/Frank/Dropbox/Storage/BEMOVI ms/Software/"

# Windows example (adapt the paths to your installation!)
# IJ.path <- "C:/Program Files/FIJI.app/ImageJ-win64.exe"
# to.particlelinker <- "C:/Users/Frank/Dropbox/bemovi demo data/Software/"
# java.path <- "C:/Windows/System32/javaw.exe"
Next, some information should be provided about the overall availability of memory (in MB) for ImageJ and the ParticleLinker. We have found 10GB to be sufficient for at least several hundred particles in a video frame.
# specify the amount of memory available to ImageJ and Java (in MB)
memory.alloc <- c(10000)
The first step you may want to take is to convert the pixel- and frame-based values of the videos into real dimensions. To do so, you need to specify the frame rate at which the video was recorded, in frames per second (fps), and the size of a pixel in micrometres.
# video frame rate (in frames per second)
fps <- 25
# size of a pixel in micrometres
pixel_to_scale <- 1000/240
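To illustrate what these two constants are used for, here is a back-of-the-envelope conversion (the particle displacement is a made-up number, not part of the bemovi workflow itself):

```r
fps <- 25                   # frames per second
pixel_to_scale <- 1000/240  # micrometres per pixel

# a particle that moves 48 pixels between two consecutive frames:
distance_um <- 48 * pixel_to_scale      # 200 micrometres
time_s <- 1 / fps                       # 0.04 seconds between frames
speed_um_per_s <- distance_um / time_s  # 5000 micrometres per second
```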
The segmentation of the videos is based on “difference image segmentation”, using a constant offset between frames and a user-specified threshold. For example, an offset of 10 means that frame 1 will be subtracted from frame 11, frame 2 from frame 12, and so on. The threshold then specifies which pixels are considered foreground when the image is binarized. If difference.lag is set to zero, no difference image is created and the threshold is applied directly. Please refer to Pennekamp & Schtickzelle (2013) for further information. Some trial and error is required to find appropriate segmentation values for new videos.
# use difference image segmentation
difference.lag <- 10
# uncomment the next line to use only threshold-based segmentation
# difference.lag <- 0
# specify threshold
thresholds <- c(20, 255)
The offset should be adjusted according to the frame rate and the speed of the moving individuals. If individuals move very slowly or the frame rate of the video is very high, a larger offset is required.
The first threshold value (20 in the example above) should be low if the contrast between background and individuals is low, although a low threshold inevitably increases the amount of background detected as foreground; a clear contrast between background and individuals allows a higher threshold, which more effectively eliminates unwanted background detected as foreground. The second threshold value (255 in the example above) should not be changed.
The check_threshold_values() function assists in finding good threshold settings by loading a selected video into ImageJ, performing the difference image procedure with the defined offset and then opening the threshold settings. The user can now interact by changing the threshold and directly observing its effect on the foreground detection (shown in red).
We have to specify the path to the raw data, the directory where the ImageJ macro is saved, the video selected for processing (vid_select = 0 corresponds to the first video in the directory), the thresholds, the path to ImageJ and, finally, the memory available to ImageJ. Be patient when you run this function: it opens ImageJ, and you may have to wait a little while for the differenced video to load. Once loaded, you can select the threshold values using the threshold box.
check_threshold_values(to.data, raw.video.folder, ijmacs.folder, 0, difference.lag, thresholds, IJ.path, memory.alloc)
The screenshot shows the open video in ImageJ and the menu for adjusting the threshold.