

Interpretable video analysis using deep learning techniques

Project conducted in collaboration with the Massachusetts Institute of Technology.

Project description

The project investigates whether classifiers trained on short videos are more resistant to input perturbations (i.e., more robust) than classifiers trained on single images.

The main driving force behind the research is the existence of 'adversarial attacks'. Adversarial attacks exploit the fact that alterations to input data, imperceptible to the human eye, can cause a model to misclassify that input, which is a serious security gap in systems based on artificial intelligence.
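As a concrete illustration of this idea, the sketch below applies the fast gradient sign method (FGSM), one well-known white-box adversarial attack, to a toy logistic-regression classifier. The weights, input, and step size here are purely illustrative and not part of the project's actual models: the point is only that a small, bounded per-feature change in the input can flip a confident prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that x belongs to class 1 under a logistic-regression model."""
    return sigmoid(w @ x + b)

def fgsm_perturb(w, b, x, y, eps):
    """Perturb x by eps (per feature) in the direction that increases the loss.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input x is (p - y) * w, where p = predict(w, b, x).
    FGSM moves each input feature by eps in the sign of that gradient.
    """
    p = predict(w, b, x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy setup: an input the model classifies as class 1 with high confidence.
rng = np.random.default_rng(0)
w = rng.normal(size=20)
b = 0.0
x = w / np.linalg.norm(w)

p_clean = predict(w, b, x)
x_adv = fgsm_perturb(w, b, x, y=1.0, eps=0.5)
p_adv = predict(w, b, x_adv)
print(p_clean, p_adv)  # confidence drops sharply after the perturbation
```

Even though each feature moves by at most `eps`, the changes align with the model's weight vector, so their effect on the decision accumulates; this is the core mechanism that makes adversarial examples so effective against high-dimensional classifiers.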

The more advanced variants of these attacks do not even require detailed knowledge of the classifier's architecture or access to its training data, which could, for example, result in e-mails carrying a competitor's logo being classified as spam by a significant proportion of mail services. This is a serious obstacle to the adoption of intelligent systems. MIM Solutions is responsible for designing, implementing, and testing appropriate algorithms for their resistance to input perturbations, in particular to adversarial attacks.

The project will conclude with the publication of a research paper at the end of 2022.

The latest machine learning algorithms are making advances on problems once thought to be the exclusive domain of humans, such as speech and object recognition.

