
Running a bemovi analysis

Loading the package and getting help

To start the processing and analysis of videos by bemovi, we first need to load the package:

library(bemovi)

Some messages show that the package dependencies, i.e. data.table, CircStats and circular, were correctly loaded. When installing the package from source, make sure these dependencies are available in your local R installation.
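
If any of these dependencies are missing, they can be installed from CRAN beforehand; a minimal sketch:

# install the dependencies from CRAN if they are not yet available
install.packages(c("data.table", "CircStats", "circular"))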

The following commands show you which functions are part of the bemovi package. Additional information can be obtained by preceding a specific function by a question mark, which will explain what the function does and what the arguments are that need to be provided:

# package manual and overview of functions in bemovi package
help(package=bemovi) 

# get specific help about bemovi function:
?locate_and_measure_particles
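
Once the package is attached, standard R tools can also be used to list everything it exports, for example:

# list all objects exported by the attached bemovi package
ls("package:bemovi")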

Preparing the basic directory structure

Bemovi was designed to work on collections of video files, each of which could be a video of a particular sample at a particular time. A collection of videos can be termed a bemovi “project”. Each bemovi project requires you to create a directory (i.e., folder) that will hold all the files and other directories concerning that project. In the R code below, this directory is named “Bemovi demo”.

Two directories need to be created within this project folder. First, a folder (below named “0 - video description”) containing a tab-delimited text file with a description of the videos. This file needs a column called “file”, which lists the names of the videos without their ending (e.g. Data0001 for the Data0001.avi file); associated information can follow in separate columns (e.g. date, time, treatment levels). Importantly, the video description file should not contain a variable called “id”, as this will cause conflicts with an automatically created ID variable, and we have observed problems if the file does not end with a “line return”. Second, a folder containing the videos to be analysed (below named “1 - raw”) has to be provided.

# project directory (you create this one yourself)
to.data <- "/Users/Frank/Bemovi demo/"

# you also have to provide two folders:
# the first contains the video description
# the second holds the raw videos
video.description.folder <- "0 - video description/"
video.description.file <- "video.description.txt"
raw.video.folder <- "1 - raw/"
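
If you are setting up a new project from scratch, the required directories and a minimal video description file can also be created directly from R. The sketch below is only an illustration: the date and treatment columns are hypothetical examples of additional information, and only the “file” column is mandatory. Note that write.table() terminates the last row with a line return, as required.

# create the project directory and the two required sub-folders
dir.create(to.data, showWarnings = FALSE)
dir.create(paste0(to.data, video.description.folder), showWarnings = FALSE)
dir.create(paste0(to.data, raw.video.folder), showWarnings = FALSE)

# a minimal video description: one row per video, a "file" column with the
# video names without their ending, and no column called "id"
video.description <- data.frame(file      = c("Data0001", "Data0002"),
                                date      = c("2015-06-01", "2015-06-01"),
                                treatment = c("control", "heated"))

# write it as a tab-delimited text file into the video description folder
write.table(video.description,
            file = paste0(to.data, video.description.folder, video.description.file),
            sep = "\t", row.names = FALSE, quote = FALSE)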

As you run the bemovi functions below, you will see that new folders and files are automatically created within the project folder. These hold the results of the different analysis steps and also the ImageJ macros to process the data. You can choose the names of these folders. Please avoid using the tilde symbol when specifying paths, as this causes problems when ImageJ is called.

# the following intermediate directories are automatically created
particle.data.folder <- "2 - particle data/"
trajectory.data.folder <- "3 - trajectory data/"
temp.overlay.folder <- "4a - temp overlays/"
overlay.folder <- "4 - overlays/"
merged.data.folder <- "5 - merged data/"
ijmacs.folder <- "ijmacs/"

The initial project directory, containing the folder with the raw videos and a folder in which the text file with the information about the experimental design is placed.

To check that the video file names conform to the naming conventions of bemovi, run:

check_video_file_names(to.data, raw.video.folder, video.description.folder, video.description.file)

The check_video_file_names() function takes four arguments, which specify the paths to the raw videos and to the video description file. If the message “Video file types ok.” appears, you can proceed to the next step; otherwise the function returns the offending filename(s), which you should correct (e.g., filenames should contain neither hyphens nor periods, except for the one separating the file name from its ending).
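
Should the check report offending file names, the videos can of course be renamed by hand; alternatively, a base R sketch like the following (the gsub() pattern is just an example that strips hyphens from .avi files) can be used. Remember to update the “file” column of the video description accordingly.

# build cleaned file names without hyphens and rename the affected videos
raw.dir   <- paste0(to.data, raw.video.folder)
old.names <- list.files(raw.dir, pattern = "\\.avi$")
new.names <- gsub("-", "", old.names)
changed   <- old.names != new.names
file.rename(paste0(raw.dir, old.names[changed]),
            paste0(raw.dir, new.names[changed]))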

Preliminaries

Before beginning the analyses of your videos, you must tell R where the ImageJ and the ParticleLinker applications are. Importantly, under Windows, you also need to provide the path to the Java executable (see example below). For example:

# UNIX example (adapt paths to your installation and avoid using the tilde symbol when specifying the paths)
IJ.path <- "/Applications/ImageJ5/ImageJ.app/Contents/Resources/Java/"
to.particlelinker <- "/Users/Frank/Dropbox/Storage/BEMOVI ms/Software/"

# Windows example (adapt the paths to your installation!)
#IJ.path <- "C:/Program Files/FIJI.app/ImageJ-win64.exe"
#to.particlelinker <-"C:/Users/Frank/Dropbox/bemovi demo data/Software/"
#java.path <- "C:/Windows/System32/javaw.exe"
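
A quick sanity check that these locations actually exist on your system (run whichever lines apply to your operating system):

# verify the specified paths before starting the analysis
file.exists(IJ.path)
file.exists(to.particlelinker)
# on Windows, also check the Java executable:
# file.exists(java.path)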

Next, some information should be provided about the overall availability of memory (in MB) for ImageJ and the ParticleLinker. We have found 10GB to be sufficient for at least several hundred particles in a video frame.

# specify the amount of memory available to ImageJ and Java (in Mb)
memory.alloc <- c(10000)

Setup

The first step is to define the constants needed to convert the pixel- and frame-based values of the videos into their real dimensions. To do so, you need to specify the frame rate at which the video was taken, in frames per second (fps), and the size of a pixel in micrometres.

# video frame rate (in frames per second)
fps <- 25

# size of a pixel in micrometer
pixel_to_scale <- 1000/240
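
These two constants are all that is needed to convert bemovi’s pixel- and frame-based measurements into real units. As a purely hypothetical illustration of the arithmetic:

# a hypothetical speed measured on the video, in pixels per frame
speed_px_per_frame <- 12

# pixels/frame * micrometres/pixel * frames/second = micrometres/second
speed_um_per_s <- speed_px_per_frame * pixel_to_scale * fps
speed_um_per_s   # 12 * (1000/240) * 25 = 1250 micrometres per second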

The segmentation of the videos is based on “difference image segmentation”, using a constant offset between frames and a user-specified threshold. For example, an offset of 10 means that frame 1 will be subtracted from frame 11, frame 2 from frame 12, and so on. The threshold then specifies which pixels are considered foreground when the image is binarized. If difference.lag is set to zero, no difference image is created and the threshold is applied directly. Please refer to Pennekamp & Schtickzelle (2013) for further information. Some trial and error is required to find appropriate segmentation values for new videos.

# use difference image segmentation
difference.lag <- 10

# uncomment the next line to use only threshold based segmentation
# difference.lag <- 0

# specify threshold
thresholds = c(20,255)

The offset should be adjusted according to the frame rate and the speed of the moving individuals. If individuals move very slowly or the frame rate of the video is very high, a larger offset is required.

The first threshold value (20 in the example above) should be low if the contrast between background and individuals is low, at the cost of more background being detected as foreground, whereas a clear contrast between background and individuals allows a higher threshold, which more effectively eliminates unwanted background detected as foreground. The second threshold value (255 in the example above) should not be changed.
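
To build some intuition for what the segmentation does, the following toy sketch (plain base R, not part of the bemovi workflow) subtracts two small synthetic frames and binarizes the result with the first threshold value:

# two toy 5 x 5 grey-scale frames (values 0-255); a bright "particle" moves
# from position (2,2) to position (4,4) between the two frames
frame1  <- matrix(15, nrow = 5, ncol = 5)
frame1[2, 2]  <- 200
frame11 <- matrix(15, nrow = 5, ncol = 5)
frame11[4, 4] <- 200

# difference image: with an offset of 10, frame 1 is subtracted from frame 11
diff.img <- abs(frame11 - frame1)

# binarize: pixels whose difference exceeds the first threshold are foreground
foreground <- diff.img >= thresholds[1]
which(foreground, arr.ind = TRUE)   # positions of the detected foreground pixels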

The check_threshold_values() function assists in finding good threshold settings by loading a selected video into ImageJ, performing the difference image procedure with the defined offset and then opening the threshold settings. The user can now interact by changing the threshold and directly observing its effect on the foreground detection (shown in red).

We have to specify the path to the raw data, the directory where the ImageJ macro is saved, the video selected for processing (vid_select = 0 corresponds to the first video in the directory), the offset, the thresholds, the path to ImageJ and finally the memory available to ImageJ. Be patient when you run this function: it will open ImageJ, and you may then have to wait a little while for the differenced video to load. Once it is loaded, you can select the threshold values using the threshold box.

check_threshold_values(to.data, raw.video.folder, 
    ijmacs.folder, 0, difference.lag, thresholds, 
    IJ.path, memory.alloc)

The screenshot below shows the open video in ImageJ and the menu to adjust the threshold.

The video file open in ImageJ and the menu to adjust the threshold. A setting of 5 seems too low, because the large background areas in the lower right corner get identified as foreground (smaller protists shown in red).

Playing with the threshold shows that a setting of 5 is too low, because much of the background gets detected as foreground. Increasing the threshold to 10, however, discriminates foreground from background much better. When you are finished determining the threshold, close the ImageJ application.

Adjusting the threshold to 10 leads to a better discrimination between foreground and background.

When the adjusted threshold differs from the previous guess, it has to be modified in R:

thresholds = c(10,255)

Thoroughly testing and checking the offset and the threshold is crucial for meaningful particle detection. The user should try to find the optimal signal-to-noise ratio between the individuals to be tracked and background noise. Although bemovi contains a function to filter particles later in the workflow, users should rather work hard to find a good threshold and offset, since not doing so can create problems (such as thousands of detected particles, which makes linking them computationally prohibitive) that cannot be fixed by later filtering.

Identify and track particles

This step is divided into two parts: (1) segment, locate and measure the particles using the previously found offset and threshold (locate_and_measure_particles() function), (2) reconstruct the movement trajectories and calculate metrics like step length, turning angle and speed (link_particles() function).

First we run the locate_and_measure_particles() function:

locate_and_measure_particles(to.data, raw.video.folder, 
    particle.data.folder, difference.lag, 
    thresholds, min_size = 5, max_size = 1000, 
    IJ.path, memory.alloc)
Sys.sleep(60)

The arguments of the function are the path to the raw videos (to.data, raw.video.folder), the directory where the output is saved (particle.data.folder), the segmentation settings (i.e. the previously determined offset [difference.lag] and the thresholds), and the size restrictions (minimum and maximum size, in pixels) for the foreground objects to be considered particles. Finally, the ImageJ path and memory are specified as described previously.

The function generates a separate text file for each video, containing the location (X- and Y-coordinates in pixels), and measurements (in pixels) for each particle in each frame of a given video. Each file is saved to the particle.data.folder and then all are merged into the particle.RData file for further processing. The processing of the demo videos took about 6 minutes on our test machine. Expect to see ImageJ doing lots of work; during this you will not be able to interact with R. If you wish to halt the process, you need to stop ImageJ (by quitting it for example); this will release R back to you.
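
To take a quick look at the merged particle data from within R, you can load the file and inspect whatever object it contains (the object name stored inside particle.RData may differ between bemovi versions, hence the indirect access via get()):

# load the merged particle data and show the first measurements
loaded <- load(paste0(to.data, particle.data.folder, "particle.RData"))
loaded               # name(s) of the object(s) just loaded
head(get(loaded[1])) # first rows of the particle measurements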

Screenshot of the particle.data.folder directory with the ParticleAnalyzer output (one file opened to show the measurements) and the particle.RData file containing the merged data.

Based on the X- and Y-coordinates extracted by the locate_and_measure_particles() function, we can then re-construct the trajectory of each particle through time. This is done with the link_particles() function, which will run the ParticleLinker on each video and then save the results in the trajectory.RData file.

link_particles(to.data, particle.data.folder, 
    trajectory.data.folder, linkrange = 5, 
    disp = 20, start_vid = 1, memory = memory.alloc)
Sys.sleep(60)

The link_particles() function takes as arguments the path to the particle measurements (to.data, particle.data.folder) and, as output directory, the trajectory.data.folder. The two arguments linkrange and disp are crucial in determining the tracking results. The link range specifies the number of adjacent frames to be considered when a particle temporarily disappears from the video (e.g. due to detection issues), whereas the displacement specifies the maximum distance a particle can move between two frames. Both arguments should be carefully validated on a set of test videos before they are used for larger sets of videos. Importantly, depending on the speed and movement characteristics of the target species, the displacement has to be adjusted to prevent erroneous trajectories.
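
One way to sanity-check the disp setting (which, like the particle coordinates at this stage, is expressed in pixels) is to translate a rough estimate of the maximum speed of your organisms back into pixels per frame, using the constants defined earlier. The speed below is purely hypothetical:

# hypothetical maximum speed of the fastest organisms, in micrometres per second
max_speed_um_per_s <- 500

# convert to pixels per frame; disp should be at least this large,
# plus a safety margin
max_speed_um_per_s / pixel_to_scale / fps   # 500 / (1000/240) / 25 = 4.8 pixels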

Screenshot of the trajectory.data.folder directory with the ParticleLinker output (one file opened to show X- and Y-coordinates, the frame (= time) and the trajectory ID) and the trajectory.RData file containing the merged data.

When you run this function, R shows some progress information.

It is important to note that the tracking can take a lot of time (i.e., days), if many individuals (1000s) are to be tracked simultaneously on long video sequences.

Merge data

Finally, this step merges the data extracted from the videos in the previous steps (i.e. particle.RData and trajectory.RData) into one database (Master.RData) and links the videos with their respective experimental description. The information about the videos should be saved in a tab-delimited text file, which contains the video file name (without its ending, e.g. Data0001) as an identifier, which therefore has to be unique throughout an experiment. The name of this description file (e.g. video.description.txt) and its location (0 - video description) were previously defined in the directory structure.

merge_data(to.data, particle.data.folder, 
    trajectory.data.folder, video.description.folder, 
    video.description.file, merged.data.folder)
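
After merging, the combined data set can be loaded for further analysis, again capturing whatever object name the Master.RData file contains:

# load the merged master file and get an overview of its contents
loaded <- load(paste0(to.data, merged.data.folder, "Master.RData"))
loaded              # name of the merged data object
str(get(loaded[1])) # particle measurements, trajectories and video metadata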