# Mathematics

I am interested in the analysis and development of optimisation methods
for inverse imaging problems. My research touches on areas of mathematics
such as **optimisation**, **geometric measure theory**, **variational analysis**,
and **partial differential equations**. Much of my research provides new
insight into **functions of bounded variation** and deformation.

**Inverse problems** generally lack unique solutions, so one usually models
prior assumptions about what a good solution is through a **regulariser**. One can
then attempt to reconstruct an image – such as a photograph or a medical,
biological, or astronomical image – by minimising an objective consisting
of a fidelity term, involving the data *f*, and the regulariser.
A popular and prototypical regulariser in image
processing is total variation: the (distributional) one-norm of the
gradient.
This is mainly due to its edge-preserving properties, although it has
several drawbacks, such as the stair-casing effect. Recently, therefore,
higher-order and curvature-based regularisers, such as **total generalised
variation**, have received increased attention due to their
better visual qualities. Very little is known about them analytically.
Yet such knowledge would be desirable for the **reliability** of
these techniques in practice: to show that no new artefacts are introduced,
and that desired features are restored correctly. *I do mathematical
analysis on these kinds of questions.*
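To make the objective concrete, here is a minimal 1-D sketch of total-variation denoising. The function name, parameter values, and the Huber-type smoothing of the one-norm are my own illustration, not taken from the text: the smoothing sidesteps the non-differentiability of the exact total variation so that plain gradient descent applies.

```python
import numpy as np

def tv_denoise_1d(f, alpha=0.3, eps=0.1, step=0.05, iters=500):
    """Minimise 0.5*||u - f||^2 + alpha * TV_eps(u) by gradient descent,
    where TV_eps(u) = sum_i sqrt((u[i+1] - u[i])^2 + eps^2) is a smoothed
    (Huber-type) discrete total variation."""
    u = f.copy()
    for _ in range(iters):
        d = np.diff(u)                        # forward differences (discrete gradient)
        w = d / np.sqrt(d**2 + eps**2)        # derivative of the smoothed absolute value
        # Gradient of TV_eps with respect to u (adjoint of the difference operator):
        grad_tv = np.concatenate(([-w[0]], -np.diff(w), [w[-1]]))
        grad = (u - f) + alpha * grad_tv      # fidelity term + regulariser
        u -= step * grad
    return u
```

Applied to a noisy piecewise-constant signal, the result keeps the jumps but flattens the noise, which is the edge-preserving behaviour described above.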

A photograph taken with a current state-of-the-art digital camera has
between 10 and 20 million pixels. Some cameras, such as the
semi-prototypical Nokia 808 PureView, have up to 41 million sensor pixels.
Some medical imaging data sets, such as full three-dimensional diffusion
tensor (DTI) or HARDI images, also contain tens of millions of data
points. The resulting optimisation problems for the improvement of such
images, or for reconstructing them from **partial data**, are **huge**
and computationally very intensive. Moreover, the aforementioned
state-of-the-art regularisers are generally non-smooth, which causes
difficulties in the application of conventional optimisation methods.
State-of-the-art image processing techniques based on mathematical
principles can only process tiny images in real time.
A question that interests me is *whether we can design optimisation
algorithms that make the processing of real high-resolution
images fast.*
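As an illustration of how the non-smoothness is typically handled, here is a 1-D sketch of one standard primal–dual (Chambolle–Pock-type) scheme for the total-variation denoising problem. It treats the one-norm exactly through a projection step rather than smoothing it; the function names and parameter values are my own illustration.

```python
import numpy as np

def D(u):
    """Discrete gradient: forward differences."""
    return np.diff(u)

def Dt(p):
    """Adjoint of D (negative discrete divergence)."""
    return np.concatenate(([-p[0]], -np.diff(p), [p[-1]]))

def rof_primal_dual(f, alpha=0.3, tau=0.45, sigma=0.45, iters=300):
    """One standard primal-dual scheme for the non-smooth problem
    min_u 0.5*||u - f||^2 + alpha*||Du||_1.
    Step sizes satisfy tau*sigma*||D||^2 < 1, since ||D||^2 <= 4 in 1-D."""
    u = f.copy()
    ubar = u.copy()
    p = np.zeros(len(f) - 1)
    for _ in range(iters):
        # Dual step: ascent followed by projection onto {|p| <= alpha},
        # which handles the one-norm exactly, without smoothing.
        p = np.clip(p + sigma * D(ubar), -alpha, alpha)
        # Primal step: proximal step on the quadratic fidelity term.
        u_new = (u - tau * Dt(p) + tau * f) / (1 + tau)
        # Over-relaxation of the primal variable.
        ubar = 2 * u_new - u
        u = u_new
    return u
```

Each iteration costs only a few passes over the data, which is why first-order primal–dual methods of this kind are attractive candidates for the very large problems mentioned above.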

If you wish to learn more about these topics, please see my publications.