CV

Basics

Name Caroline Mazini Rodrigues
Label PhD student
Email caroline.mazinirodrigues@esiee.fr
Url https://carolmazini.github.io/
Summary Caroline is a PhD student at ESIEE Paris and EPITA, and a member of the LIGM and LRE laboratories. She works on the explainability of Deep Neural Networks.

Education

  • 2020 - Present
    France
    PhD
    ESIEE Paris (Université Gustave Eiffel)
    Explainable Artificial Intelligence
  • 2018 - 2020
    Brazil
    MSc
    Universidade Estadual de Campinas (Unicamp)
    Machine Learning
  • 2013 - 2017
    Brazil
    Bachelor
    Universidade Estadual Paulista Júlio de Mesquita Filho (UNESP)
    Computer Science

Teaching

  • 2023 - Present
    Université Gustave Eiffel
    Teaching assistant
    • Image Processing (Master)
    • Algorithms and Programming (Bachelor)
    • C Programming (Bachelor)
    • Databases (Bachelor)
  • 2021 - 2023
    EPITA - École d'Ingénieurs en Informatique
    Teaching assistant
    • Python for Big Data (Master)
    • Introduction to Neural Networks (Master)
    • Mathematics of Signals (Master)
    • Theory of Rational Languages (Bachelor)
    • Algorithmic Complexity (Bachelor)
  • 2019 - 2020
    Unicamp
    Teaching assistant
    • Complex Data Mining: Information Retrieval Learning (Specialization)
    • Complex Data Mining: Supervised Learning (Specialization)
    • Complex Data Mining: Unsupervised Learning (Specialization)
    • Algorithms and Computer Programming (Bachelor)

Publications

  • 2024
    Bridging Human Concepts and Computer Vision for Explainable Face Verification
    Workshop at AI*IA
    In this paper, we present an approach that combines computer and human vision to increase the interpretability of the explanations produced by a face verification algorithm. In particular, we are inspired by the human perceptual process to understand how machines perceive the human-semantic areas of faces during face comparison tasks.
  • 2024
    Transforming gradient-based techniques into interpretable methods
    Pattern Recognition Letters - Elsevier
    We introduce GAD (Gradient Artificial Distancing) as a supportive framework for gradient-based explainable techniques. Its primary objective is to accentuate influential regions by establishing distinctions between classes. The essence of GAD is to limit the scope of analysis during visualization and, consequently, reduce image noise.
  • 2024
    Unsupervised discovery of Interpretable Visual Concepts
    Information Sciences - Elsevier
    In this paper, we propose two methods, Maximum Activation Groups Extraction (MAGE) and Multiscale Interpretable Visualization (Ms-IV), to explain the model's decision, enhancing global interpretability. MAGE finds, for a given CNN, combinations of features which, globally, form a semantic meaning, which we call a concept.
  • 2024
    Reasoning with trees: interpreting CNNs using hierarchies
    arXiv
    In this paper, we propose a framework to construct model-based hierarchical segmentations that maintain the model's reasoning fidelity and allow both human-centric and model-centric segmentation. This approach offers multiscale explanations, aiding bias identification and enhancing understanding of neural network decision-making. Experiments show that our framework, xAiTrees, delivers highly interpretable and faithful model explanations, not only surpassing traditional xAI methods but also shedding new light on a novel approach to enhancing xAI interpretability.
  • 2021
    Manifold Learning for Real-World Event Understanding
    IEEE Transactions on Information Forensics and Security
    We build upon our prior work and present a learning-from-data method for dynamically learning the contribution of different components for a more effective event representation. The method relies upon just a few training samples (few-shot learning), which can be easily provided by an investigator.
  • 2020
    Forensic Event Analysis: From Seemingly Unrelated Data to Understanding
    IEEE Security & Privacy
    We discuss the problem of restructuring visual data from different heterogeneous sources to analyze an event of interest. We present X-coherence: a pipeline seeking to organize and represent pieces of data, tying them coherently with the real world and with one another. We also outline research challenges while seeking X-coherence.
  • 2019
    Image Semantic Representation for Event Understanding
    IEEE International Workshop on Information Forensics and Security
    We propose an image semantic representation method that helps to understand the discrimination of Representative Images (RI) from Non-representative Images (NRI). Our method, called Event Semantic Space (ESS), generates a low-dimensional image representation by exploiting the semantics of some images with high representativeness and some representative components of the events (e.g., places, objects, and people).

Skills

Computer Science
Explainable Artificial Intelligence
Deep Learning
Machine Learning
Computer Vision
Programming
Python
PyTorch/TensorFlow
C

Languages

Portuguese
Native speaker
English
Fluent
French
Intermediate
Spanish
Basic

Interests

Machine Learning
Interpretability
Explainability
Deep Learning
Representation Learning
Feature Engineering
Supervised / Unsupervised / Semi-supervised Learning
Generative AI
Multimodal Learning
Data Mining
Multimodal Data Analysis
Pattern Recognition
Information Retrieval
Content-Based Image Retrieval
Ranking Aggregation
Contextual Rankings

References

Professor Laurent Najman
LIGM - Université Gustave Eiffel (PhD supervisor)
Nicolas Boutry
LRE - EPITA (PhD supervisor)
Professor Zanoni Dias
IC - Unicamp (MSc supervisor)
Professor Anderson Rocha
IC - Unicamp (MSc supervisor)