For the past five years, Prof. Dinov has been working on designing,
testing and documenting mathematical and statistical models for
studying and analyzing medical images and other natural phenomena.
In particular, he is involved in 7 projects modeling Human Brain
Functional and Anatomical data using Discrete Wavelet/Fractal
transforms, and one Optimization project.
Project 1: We have developed the first fully stochastic Functional and
Anatomic Sub-Volume Probabilistic Atlas (F&A SVPA) for elderly and
Alzheimer's Disease (AD) patients. This atlas allows us early diagnosis,
prognosis and planning of treatment for AD subjects, based on data on
their blood flow and brain anatomy.
Project 2 deals with quantifying (numerically)
the neurological and topological differences and similarities between
pairs of (MRI, fMRI, PET, CT) brain scans. We were able to design metrics
on the space of Fractal/Wavelet transforms of signals that help us make
distinctions between equivalent medical images, using their transforms.
The theoretical function estimation schemes we introduced have been used
to develop an algorithm and a computer implementation for an automatic
and robust approach to quantifying warp performance. This software
package is called "Wavelet Analysis of Image Registration".
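The idea of comparing images through their transforms rather than their raw samples can be illustrated with a minimal sketch. This is not the metric designed in Project 2; it is a toy version assuming 1D signals of even length and a single level of the orthonormal Haar wavelet transform:

```python
import math

def haar_1level(signal):
    """One level of the orthonormal Haar transform: (averages, details)."""
    avg = [(signal[i] + signal[i + 1]) / math.sqrt(2.0)
           for i in range(0, len(signal) - 1, 2)]
    det = [(signal[i] - signal[i + 1]) / math.sqrt(2.0)
           for i in range(0, len(signal) - 1, 2)]
    return avg, det

def transform_distance(f, g):
    """L2 distance between the Haar transforms of two equal-length signals."""
    fa, fd = haar_1level(f)
    ga, gd = haar_1level(g)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(fa + fd, ga + gd)))

d = transform_distance([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 5.0])
```

Because the Haar transform used here is orthonormal, this toy distance coincides with the raw L2 distance; a practical transform-space metric would instead weight or select coefficients to emphasize features of interest.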
In Project 3, we develop a new technique for determining statistically
significant metabolic variations in
single/multi-subject human brain functional studies. The new method,
Sub-Volume Thresholding (SVT),
models the difference images as "locally" stationary
Gaussian random fields, thus adding more flexibility to the commonly
used "globally" stationary random field approaches. Our model naturally
involves a class of continuous functions which, as we showed, induces a
family of permissible covariance matrices (valid covariograms). Using the
SVT technique we are trying to identify local perfusion effects and group
differences in: left vs. right motor studies; amnesia vs.
memory-retrieval-deficit AD (Alzheimer's disease) patients;
and groups of hallucination vs. delusion patients.
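The "locally" vs. "globally" stationary distinction above can be sketched in a few lines. This is only an illustration of the sub-volume idea, not the SVT method itself: it assumes a 2D difference image, non-overlapping square blocks, and a simple per-block z-score rule in place of the Gaussian random field theory:

```python
import math

def svt_flag(diff, block=2, z=1.5):
    """Flag pixels exceeding z local standard deviations, where mean/std
    are estimated separately inside each block ("locally" stationary)
    rather than over the whole image ("globally" stationary)."""
    rows, cols = len(diff), len(diff[0])
    flags = [[False] * cols for _ in range(rows)]
    for r0 in range(0, rows, block):
        for c0 in range(0, cols, block):
            vals = [diff[r][c]
                    for r in range(r0, min(r0 + block, rows))
                    for c in range(c0, min(c0 + block, cols))]
            mu = sum(vals) / len(vals)
            sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals)) or 1e-12
            for r in range(r0, min(r0 + block, rows)):
                for c in range(c0, min(c0 + block, cols)):
                    if abs(diff[r][c] - mu) > z * sd:
                        flags[r][c] = True
    return flags
```

A signal confined to a quiet sub-volume can stand out against its local noise level even when a single global threshold would miss it; that flexibility is what the local model buys.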
If we wish to compare two images and identify corresponding structures
(or regions of activation, for functional data), we need to use a warping
technique to deform one of the images into an image similar to the second
one. This brings up the question: "What kind of deformation should we
use?" In Project 4, we constructed a mathematical model
that helps classify warps and warping techniques.
Segmentation of medical images is the topic of Project 5.
Using the discrete
dynamical system induced by our fractal transform, we designed a
segmentation algorithm. The two major goals in brain image segmentation
are: identifying regions of high concentration of White Matter, Grey
Matter and CSF (Cerebro-Spinal Fluid); and reducing the data complexity
and dimensionality.
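The first goal can be made concrete with a toy labeler. This is not the fractal dynamical-system segmenter; it simply assigns each voxel to the tissue class with the nearest intensity prototype, using hypothetical T1-like class means, and shows how segmentation also serves the second goal by collapsing continuous intensities into three labels:

```python
# Assumed (hypothetical) mean normalized intensities per tissue class.
PROTOTYPES = {"CSF": 0.1, "GM": 0.45, "WM": 0.8}

def label_voxel(intensity):
    """Assign the tissue class whose prototype intensity is closest."""
    return min(PROTOTYPES, key=lambda k: abs(PROTOTYPES[k] - intensity))

def segment(image):
    """Map a 2D intensity image to a 2D image of tissue labels."""
    return [[label_voxel(v) for v in row] for row in image]
```

Real segmenters must of course handle noise, partial-volume effects and intensity inhomogeneity, which is what motivates more sophisticated approaches such as the one in Project 5.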
Our models, and our metrics, turn out also to be useful for image
magnification.
In Project 6, we compared the current state-of-the-art interpolation
techniques for image zoom-in to the novel Fractal magnification
algorithms. We were
able to show that our model outperforms the interpolation method in
several aspects. Blowing up images using their fractal transforms reveals
more detail (at lower resolution) and avoids the smearing and blurring
effects of interpolation.
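The interpolation baseline, and the smearing effect it introduces, can be seen in one dimension. This is only the baseline method, not the fractal magnification algorithm: linear interpolation doubles the sample count but replaces a sharp step edge by a ramp:

```python
def linear_zoom2(signal):
    """Double the length of a 1D signal by linear interpolation."""
    out = []
    for i in range(len(signal) - 1):
        out.append(signal[i])
        out.append((signal[i] + signal[i + 1]) / 2)  # inserted midpoint
    out.append(signal[-1])
    return out

edge = [0, 0, 1, 1]          # a sharp step edge
zoomed = linear_zoom2(edge)  # the step now passes through a blurred 0.5
```

The new 0.5 sample between the 0 and 1 plateaus is exactly the edge blurring that a self-similarity-based (fractal) magnifier tries to avoid.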
Fractal-like transformations could be used for automatic pattern
recognition and feature extraction. Project 7 deals with a simple
application of such techniques. We are able to show that a decent pattern
recognition algorithm could be used for image registration and alignment
- a very useful tool for image comparison.
My work in the Optimization project includes developing, implementing and
testing algorithms for solving min/max, linear/non-linear
problems/systems/inequalities. Using Subdivision Traversing and other
topological algorithms, we introduce a class of simple, fast and robust
algorithms for function optimization.
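A minimal sketch of what a subdivision-style minimizer could look like in one dimension follows. This is an assumption about the flavor of such methods, not Prof. Dinov's actual algorithm: it repeatedly splits the most promising interval at its midpoint, ranking intervals by the function value at their centers, and needs no derivatives:

```python
def subdivide_minimize(f, lo, hi, iters=40):
    """Derivative-free 1D minimization by greedy interval subdivision."""
    boxes = [(f((lo + hi) / 2), lo, hi)]      # (value at center, bounds)
    for _ in range(iters):
        boxes.sort()                          # most promising box first
        _, a, b = boxes.pop(0)
        m = (a + b) / 2
        boxes.append((f((a + m) / 2), a, m))  # left half
        boxes.append((f((m + b) / 2), m, b))  # right half
    v, a, b = min(boxes)
    return (a + b) / 2, v

x, v = subdivide_minimize(lambda t: (t - 1.3) ** 2, -4.0, 4.0)
```

Because only function values are used, the same greedy-subdivision idea extends to boxes in higher dimensions and to objectives with no analytic form.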
The casting problem serves as a motivation in this project.
When casting an airplane wing, for example, there are a number of input
variables (like: Temperature, Pressure, Flow velocity, alloy composition,
etc.) and a list
of output characteristics (like: Strength, Number of voids, etc.). The
goal is to increase the strength of the wing, decrease the number of
bubbles, etc., without actually knowing the function connecting the two
types of variables. Currently, this problem is approached by some sort of
heuristic (or random) selection of test points (input variables),
conducting an experiment and observing the output. We have designed an
algorithm that solves an optimization problem to optimize the search for
the "right" test points, based on the previously obtained functional
values at prior test points.