Making it up (Part 1)

Whenever you’re testing a new analysis protocol or playing around with some software, it’s always handy to have some sample data to mess with. But what if you don’t yet have the data, or what if you need more, or need more specific data? In this post, we’re going to delve into the world of synthetic data by making a sample tracking data set.

We’re going to make a dataset using Fiji, to use for practising tracking. To do this, you have to understand a bit about how particles move under the influence of Brownian Motion. The maths is relatively simple and we’ll take it one step at a time. We’ll also be using the Fiji Macro Editor, so open it up by running [Plugins > New > Macro] and then selecting IJ1 Macro from the language bar.

[Figure: 2016-07-MakingItUp_001]

An artist needs a canvas

We’re going to build a 2D timecourse, so to do that you need an image. To make things easier in the long run, we’re going to parameterise our variables. Add the following lines to your script (double slashes are comments and are ignored by imageJ):

imageDim=512;     //-- Set image dimension in pixels
pixelSize=0.1;     //-- What is the pixel size in um/pixel
timeFrames=100;  //-- How many time frames?

newImage("Tracks", "8-bit grayscale-mode black", imageDim, imageDim, 1,1, timeFrames);

This is the same as running the [File > New > Image…] command from the toolbar. The comma-separated options above are the image name, mode, width, height, channels, slices and time frames.

You should also add the following line to calibrate the image:

run("Properties...", "unit=um pixel_width="+pixelSize+" pixel_height="+pixelSize+" voxel_depth=1 frame=[0.05 sec]");

Note that here, because the parameters are given as a string, we have to concatenate the strings and variables using double quotes and plus signs.
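If the concatenation looks odd, a quick throwaway line (my own illustration, not part of the final script) shows what string is actually being built; with pixelSize set to 0.1 it prints pixel_width=0.1 to the Log window:

print("pixel_width="+pixelSize);     //-- Just a check: prints the assembled option string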

The pixel size used is roughly equivalent to imaging at 150x on a modern CCD camera. It’s important to calibrate the image as we’re going to use real-world values to describe the particle motion.

Where to start

Now that we have our canvas, we need a starting point for our track (we’ll get onto multiple tracks later). We’re going to use a random starting point. But first we’ll need an array in which to store our coordinates. Add the following lines to create two arrays:

posX=newArray(timeFrames+1);
posY=newArray(timeFrames+1);

Notice here that we’re using the variable timeFrames again. If you want to change the number of frames in your movie, you only have to do it in one place (at the top) and it will propagate throughout. Yay for variables! Finally, to generate a starting position and store it in position 1 of the array, use:

posX[1]=floor(random*imageDim);
posY[1]=floor(random*imageDim);

This simply picks a random number between 0 and 512 (random*imageDim) and rounds it down to the nearest pixel (floor). Let’s go ahead and draw that onto the image.

makeOval(posX[1], posY[1], 5, 5);     //-- Make a selection in the starting position using a 5 pixel circle
run("Draw", "slice");     //-- Draw the point into the image (using foreground colour)
run("Select None");     //-- Deselect everything

[Figure: 2016-07-MakingItUp_002]

Ignore for the time being that there’s a hole in the spot. We’ll fix that at the end, I promise.

Doin’ the random walk

Really, that was the easy part. What we now need to do is predict how a particle moves by Brownian Motion for each time point of our movie. As the X and Y dimensions are independent, we’ll start with just the X.

The position of a particle moving by pure Brownian diffusion, relative to the starting position, can be predicted based on:

  1. The amount of time that has passed (dT)
  2. The diffusion coefficient of the particle (D)
  3. A random element

Given the starting X coordinate posX[1], we can calculate an X position after one time step using

posX[2] = posX[1] + ( sqrt(2*D*dT) * randn() );

Importantly, this relies on one other function, randn(), to generate a normally distributed random number, which unfortunately ImageJ does not have.

A quick aside on random numbers

A uniformly distributed random number is one where the likelihood of any number coming up is equal. Tossing a coin or rolling a die are examples of uniform random number generators. A frequency distribution (for example, of a die roll) would look like this:

[Figure: 2016-07-MakingItUp_003]

I quickly rolled a million dice, to prove a point.

Normally distributed random numbers follow the classic bell curve, where the likelihood of picking a number near the mean is greater than that of picking one on the outskirts. Take, for example, the results of rolling two dice and summing the values:

As any backgammon player will tell you, the most likely result is 7.


It is this latter distribution that we want because of the nature of Brownian motion, whereby there is less likelihood of finding a particle the further we go from its starting point.
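If you want to see the difference for yourself in the macro language, here’s a throwaway loop (my own illustration, not part of the tracking script): single rolls are spread evenly between 1 and 6, while the sum of two rolls piles up around 7.

//-- Optional experiment: uniform single rolls versus peaked two-dice sums
for (i=0; i<20; i++) {
    dieA=floor(random*6)+1;
    dieB=floor(random*6)+1;
    print("one die: "+dieA+"    sum of two: "+(dieA+dieB));
}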

Back to the script

To turn a uniform random number generator into a normally distributed one, we’re going to use the surprisingly simple Box-Muller transformation. To do this, add a function to the bottom of your script:

//-- Functions
function randn() {
    //-- Use the Box-Muller method to produce normally distributed random numbers
    U=random();
    V=random();
    return sqrt(-2*log(U))*cos(2*PI*V);
}

Now we can call randn() and have a normally distributed random number returned. Don’t believe me? Here are another million random numbers produced this way, with a Gaussian fit:

[Figure: 2016-07-MakingItUp_005]
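If you’d rather check the numbers than trust a histogram, a small test loop (my own check, assuming the randn() function above is already in your script) should report a mean close to 0 and a standard deviation close to 1:

//-- Optional check on randn(): mean should be near 0, standard deviation near 1
n=100000;
sum=0;
sumSq=0;
for (i=0; i<n; i++) {
    r=randn();
    sum=sum+r;
    sumSq=sumSq+r*r;
}
mean=sum/n;
print("mean: "+mean+"    sd: "+sqrt(sumSq/n - mean*mean));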

OK, that’s all the theory out of the way, so the only thing left to do is apply this calculation at every time step. If you’ve been playing along, you should have something a bit like the code below. I’ve jumped ahead a bit and parameterised D and dT (note that they are now up at the top), and included a for loop to run through every frame and calculate our single track. Last but not least, I’ve added a final line, once the track has been made, to fill in the circles (I did promise, after all).

imageDim=512;     //-- Set image dimension in pixels
pixelSize=0.1;    //-- What is the pixel size in um/pixel
timeFrames=500;   //-- How many time frames?
D=0.1;            //-- Diffusion Coefficient (um^2/s)
dT=0.05;          //-- Time Interval (s)

newImage("Tracks", "8-bit grayscale-mode black", 512, 512, 1,1, timeFrames);
run("Properties...", "unit=um pixel_width="+pixelSize+" pixel_height="+pixelSize+" voxel_depth=1 frame=["+dT+" sec]");

//-- Create arrays to hold the coordinates
posX=newArray(timeFrames+1);
posY=newArray(timeFrames+1);
//-- Set starting coordinates
posX[1]=floor(random*imageDim);
posY[1]=floor(random*imageDim);

makeOval(posX[1], posY[1], 5, 5);     //-- Create the selection
run("Draw", "slice");                 //-- Draw the point into the image (using foreground colour)
run("Select None");                   //-- Deselect everything

//-- Make the rest of the track
for (j=2;j<timeFrames+1;j++){

//-- Calculate the new position
posX[j]=posX[j-1]+(sqrt(2*D*dT)*randn());
posY[j]=posY[j-1]+(sqrt(2*D*dT)*randn());

setSlice(j); //-- Select the correct frame

makeOval(posX[j], posY[j], 5, 5);    //-- Create the selection
run("Draw", "slice");                //-- Draw the point into the image (using foreground colour)
run("Select None");                  //-- Deselect everything
        
} // j loop (timepoints)

run("Fill Holes", "stack");    //-- Fill the holes in the stack

//-- Functions
function randn() {
    //-- Use the Box-Muller method to produce normally distributed random numbers
    U=random();
    V=random();
    return sqrt(-2*log(U))*cos(2*PI*V);
}

Taking reality for a ride

There are a couple of other things you may want to consider if you’re using these data for tracking practice. Firstly, the blobs are binary: black and white. This is neither very realistic nor particularly helpful for feature identification. We can sort that out by adding a Gaussian blur. Add the following line under the Fill Holes line (above the function):

run("Gaussian Blur...", "sigma=2.5 stack");

I’ve chosen a blur value equal to half the diameter of our particles, but you can set it to anything you please. Finally, any detection system will have some noise associated with it. You can add some background noise on top of your particles with:

run("Add Specified Noise...", "stack standard=10");

Again, 10 is a somewhat arbitrary choice for the noise standard deviation. Run it all through and you should end up with something like this:

[Figure: Tracks-1]

If you’re underwhelmed, bear in mind that the values for D and dT are realistic, so the motion you get is feasible (based on those numbers at this magnification). If you want to go a bit mad, just try increasing these values.
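As a rough back-of-the-envelope check (my own aside), the typical step per frame is sqrt(2*D*dT) = sqrt(2 × 0.1 × 0.05) = 0.1 µm, which at 0.1 µm/pixel is only about one pixel per frame, so a slow, jittery wander is exactly what these numbers predict:

print("Typical step: "+sqrt(2*0.1*0.05)+" um");     //-- ~0.1 um, i.e. about one pixel per frame here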

This is the framework for making up some data on which to practise tracking, or just stare at adoringly. To add more particles, you can just add a loop that wraps almost the whole script, as sketched below. Save the fill, noise and Gaussian blur steps for the end.
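To make that structure concrete, here is one way a multi-particle version could be laid out. This is my own sketch rather than code from the post: nParticles is an invented parameter, and everything else just mirrors the script above.

//-- A possible multi-particle version (a sketch, not from the original post)
imageDim=512;      //-- Image dimension in pixels
pixelSize=0.1;     //-- Pixel size in um/pixel
timeFrames=500;    //-- Number of time frames
D=0.1;             //-- Diffusion coefficient (um^2/s)
dT=0.05;           //-- Time interval (s)
nParticles=10;     //-- New: how many tracks to simulate

newImage("Tracks", "8-bit grayscale-mode black", imageDim, imageDim, 1, 1, timeFrames);
run("Properties...", "unit=um pixel_width="+pixelSize+" pixel_height="+pixelSize+" voxel_depth=1 frame=["+dT+" sec]");

for (p=0; p<nParticles; p++) {

    //-- Each particle gets its own arrays and random starting point
    posX=newArray(timeFrames+1);
    posY=newArray(timeFrames+1);
    posX[1]=floor(random*imageDim);
    posY[1]=floor(random*imageDim);

    for (j=1; j<timeFrames+1; j++) {
        if (j>1) {
            //-- Step from the previous position
            posX[j]=posX[j-1]+(sqrt(2*D*dT)*randn());
            posY[j]=posY[j-1]+(sqrt(2*D*dT)*randn());
        }
        setSlice(j);
        makeOval(posX[j], posY[j], 5, 5);
        run("Draw", "slice");
        run("Select None");
    }

} // p loop (particles)

//-- Only once every track is drawn: fill, blur and add noise
run("Fill Holes", "stack");
run("Gaussian Blur...", "sigma=2.5 stack");
run("Add Specified Noise...", "stack standard=10");

//-- Functions
function randn() {
    //-- Box-Muller method for normally distributed random numbers
    U=random();
    V=random();
    return sqrt(-2*log(U))*cos(2*PI*V);
}

The key point is simply that the drawing loop sits inside the particle loop, while Fill Holes, the blur and the noise are run once at the very end.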
