Atomic Force Microscopy Data Analysis
Local properties of materials can be probed with nanometer resolution by means of an atomic force microscope, performing force spectroscopy experiments. Force-distance (FD) curves contain valuable information about nanoscale material properties such as adhesion, elasticity, plasticity and friction.
The large amount of information that can be extracted from a single experiment may require considerable computational effort to reconstruct physically meaningful parameters by comparison with contact models. As a result, a fast and easy analysis based on these dynamic methods is still far from being routinely applied to spectroscopy maps, or it remains limited to a small subset of the available information.
The goal of this project is to go one step further in this direction, by applying pattern recognition techniques to these high-dimensional data, in order to visualize and discover peculiarities of the analyzed samples.
We will start with standard techniques and then increase the complexity, moving towards more advanced data reduction techniques and clustering methodologies.
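As a minimal illustration of the kind of pipeline we have in mind, the sketch below reduces a set of synthetic force-distance curves with PCA and groups them with k-means; the curves, their number, and the two "phases" are made-up stand-ins for real spectroscopy-map data, not results from the project:

```python
# Sketch only: synthetic force-distance (FD) curves, PCA reduction, k-means.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for a spectroscopy map: 200 FD curves of 128 points each, drawn
# from two hypothetical material phases with different effective stiffness.
z = np.linspace(0.0, 1.0, 128)
stiff = np.array([k * z**1.5 for k in rng.uniform(8.0, 10.0, 100)])
soft = np.array([k * z**1.5 for k in rng.uniform(2.0, 4.0, 100)])
curves = np.vstack([stiff, soft]) + rng.normal(0.0, 0.05, (200, 128))

# Reduce each curve to a few principal components, then cluster the scores.
scores = PCA(n_components=3).fit_transform(curves)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
```

With clean synthetic data the two phases separate immediately; on real maps the interesting step is inspecting how the clusters and the principal components relate back to physical parameters of the sample.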
Cluster Analysis for Drug Discovery
We investigate how cluster analysis techniques may help to describe and characterize small organic molecules. In aqueous solvent, these molecules exist in different conformations, characterized by multiple low-energy, interconverting states separated by possibly large energetic barriers. The analysis of such systems is generally based on experimental techniques coupled with physics-based computer simulations. This amounts to estimating the probability density of the conformational states, viewed as a "landscape" in a coordinate space representing the relevant degrees of freedom of the system. Traditional methods relying on histograms of molecular dynamics or Monte Carlo samples may become very time-consuming as the dimension of the system increases, requiring huge datasets for statistical accuracy. Furthermore, these methods do not provide a direct estimate of the free-energy basins where the conformations are most stable.
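As a toy illustration of this density-based view (not the project's actual method), the sketch below estimates the probability density of a single hypothetical torsional coordinate from synthetic samples and converts it into a free-energy profile F(x) = -kT ln p(x):

```python
# Toy example: free-energy profile from a density estimate of MD-like samples.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Synthetic stand-in for simulation samples of one torsional angle (degrees):
# two interconverting states, centred at the hypothetical values -60 and +60,
# with the first state sampled twice as often (i.e. more stable).
samples = np.concatenate([rng.normal(-60, 15, 5000), rng.normal(60, 15, 2500)])

# Estimate the probability density and convert it to a free-energy profile
# F(x) = -kT * ln p(x), expressed in units of kT with the minimum at zero.
grid = np.linspace(-180, 180, 361)
p = gaussian_kde(samples)(grid)
F = -np.log(p / p.max())

basin = grid[np.argmin(F)]  # location of the deepest free-energy basin
```

In one dimension this works well; the difficulty the project addresses is that histogram- or KDE-style estimates become data-hungry in high dimension and do not directly identify the basins.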
Mobile Sensing and Navigation
The project aims at providing a low-cost and robust mobile sensing system. The system uses off-the-shelf, low-cost robots endowed with basic sensing and computational capabilities. The multiplicity of vectors allows fast execution, by performing tasks in a distributed fashion, and lowers the global failure risk, by distributing the probability of breakdown.
The mobile sensing system will carry out co-operative and optimized tasks.
Molecule 3D Localization for Super-resolution and Nanoscopy
This project aims to provide computational tools for 3D super-resolution to the LAMB group of the Nanophysics department.
One of the outcomes of this collaboration is a new procedure for the analysis of thick biological specimens (50–150 μm) by coupling far-field individual molecule localization with selective plane illumination microscopy (SPIM). This made it possible to obtain a lateral localization precision of less than 35 nanometers and an axial localization precision of 65–140 nanometers, depending on the characteristics of the sample.
In this context, PAVIS provides the computational tools used to perform the 3D localization of the molecules by analyzing the large number of images capturing the stochastic photoactivation of individual molecules. Using image processing tools and non-linear regression over the known point spread function of the activated molecules, it is possible to determine their positions with nanometric accuracy.
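A minimal sketch of this kind of localization step, assuming a Gaussian approximation of the point spread function; the frame size, molecule position and noise level below are synthetic choices, not parameters of the actual system:

```python
# Sketch: sub-pixel molecule localization by fitting a Gaussian approximation
# of the point spread function (PSF) to a synthetic camera frame.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_psf(coords, x0, y0, sigma, amp, offset):
    x, y = coords
    g = amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2)) + offset
    return g.ravel()

# Synthetic 15x15 frame with one molecule at a hypothetical sub-pixel position.
x, y = np.meshgrid(np.arange(15.0), np.arange(15.0))
true_params = (7.3, 6.8, 1.5, 100.0, 10.0)  # x0, y0, sigma, amplitude, offset
rng = np.random.default_rng(2)
frame = gaussian_psf((x, y), *true_params) + rng.normal(0.0, 1.0, x.size)

# Non-linear least squares recovers the position with sub-pixel precision,
# which the pixel size then converts into nanometers.
popt, _ = curve_fit(gaussian_psf, (x, y), frame, p0=(7.0, 7.0, 2.0, 80.0, 0.0))
x0_est, y0_est = popt[0], popt[1]
```

Repeating this fit over every activation event in thousands of frames is exactly what makes the overall pipeline computationally heavy.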
The overall process is computationally demanding, since the number of images to analyze may easily reach several thousand. Ongoing collaborations aim to improve the computational tools for the Nanophysics department in terms of efficiency and accuracy. In particular, we are studying approaches towards live imaging of thick biological specimens, which would have a considerable impact on the way life scientists analyze biological samples.
Multi-modal Fusion of Video and Thermal Data
In the last few years, the diffusion of new imaging modalities has improved the reliability of automated surveillance systems. In particular, far-infrared or thermal imaging is able to efficiently cope with working conditions that limit the use of visible imaging devices, such as night-time or adverse weather. Moreover, thermal imaging is less affected by lighting conditions and provides enhanced contrast between human bodies and their environment. The most widespread approaches for automatic pedestrian detection, devised for applications such as automated surveillance in public places or driver assistance, rely on single-modality images, either in the visible spectrum or in another modality such as near-infrared or far-infrared. However, as thermal and visible imaging bring complementary information about the same scene, their combination, or multi-modal fusion, can achieve increased robustness in the detection task, allowing inferences that cannot be obtained from a single sensor or source, or whose quality exceeds that of an inference drawn from any single source.
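As a schematic illustration of the fusion idea (a simple convex combination of detection scores, i.e. late fusion, not necessarily the scheme used in this project), consider two hypothetical per-window confidence values from the two modalities:

```python
# Schematic late fusion: a convex combination of per-window detection scores.
import numpy as np

def fuse_scores(score_visible, score_thermal, w_visible=0.5):
    """Weighted average of the confidences of two single-modality detectors."""
    return w_visible * score_visible + (1.0 - w_visible) * score_thermal

# Hypothetical confidences for three candidate windows from each modality.
vis = np.array([0.9, 0.2, 0.6])  # visible-spectrum detector (weak at night)
thr = np.array([0.8, 0.7, 0.1])  # thermal detector (robust to lighting)

fused = fuse_scores(vis, thr)
detections = fused > 0.5
```

Only the window supported by both modalities survives the threshold, which is the intuition behind the robustness gain: each modality vetoes the other's spurious responses.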
Self-localisation of Heterogeneous Sensor Networks
Sensor networks are widespread in our society and will represent a pervasive way to monitor the activity of a given area.
Such networks may span from a few sensors in a room to a metropolitan network of sensors deployed across a city. As a striking example, the CCTV network of the London Metropolitan Area alone has approximately 200,000 cameras to manage. At this scale, even the simple 3D localisation of the devices becomes an unmanageable task for any operator.
To this end, PAVIS is actively working on the self-localisation of heterogeneous sensor networks (mainly video, range and audio). Our aim is to find methods that automatically estimate the position of the sensors with as few assumptions about their placement as possible. Moreover, we make explicit use of the multi-modality of the sensors (e.g. video plus audio) in order to disambiguate situations where a single modality would fail. In the single-modality case, we have already provided compact closed-form solutions for microphones deployed in an area and arbitrary sound events, with both positions unknown. Moreover, the approach scales very well with the number of microphones, thus enabling the self-calibration of arbitrarily large networks.
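As a simplified illustration of recovering sensor geometry from pairwise measurements (classical multidimensional scaling, not the closed-form solution mentioned above, and assuming inter-microphone distances are already available), consider:

```python
# Simplified illustration: recover sensor positions (up to a rigid transform)
# from pairwise distances via classical multidimensional scaling (MDS).
import numpy as np

def classical_mds(D, dim=2):
    """Embed points from a matrix of pairwise Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
    B = -0.5 * J @ (D**2) @ J                # double-centred Gram matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # keep the largest eigenvalues
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Hypothetical microphone layout; in practice the distances would be derived
# from measured signal delays rather than being known in advance.
mics = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 3.0], [0.0, 3.0], [2.0, 1.5]])
D = np.linalg.norm(mics[:, None] - mics[None, :], axis=-1)

est = classical_mds(D)  # positions in an arbitrary rotated/reflected frame
```

The recovered geometry matches the true layout up to rotation, reflection and translation, which is the inherent ambiguity of any localisation from relative measurements alone; the harder problem addressed by the project is doing this when the sound-event positions are unknown too.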