Research
(2025) Interpretable model for temporal attribution in time-series data
Developed TimeSliver, an interpretable deep learning model that integrates raw and symbolically binned time-series data to capture temporal interactions and compute temporal attribution scores, achieving an 11% performance improvement over state-of-the-art explainable methods.
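A minimal sketch of the two-branch idea (not the published TimeSliver architecture): raw values pass through a recurrent branch, a symbolically binned copy through an embedding branch, and a per-timestep score doubles as the temporal attribution. The binning scheme, layer sizes, and pooling below are placeholder assumptions.

```python
import torch
import torch.nn as nn

def symbolic_bin(x, n_bins=8):
    """Discretize a (batch, time) series into integer symbols via
    equal-width binning (a stand-in for the paper's binning scheme)."""
    lo = x.amin(dim=1, keepdim=True)
    hi = x.amax(dim=1, keepdim=True)
    return ((x - lo) / (hi - lo + 1e-8) * n_bins).long().clamp(max=n_bins - 1)

class TwoBranchAttributor(nn.Module):
    """Raw branch (GRU) + symbolic branch (embedding), fused per timestep;
    a linear scorer yields one attribution value per step."""
    def __init__(self, n_bins=8, hidden=32):
        super().__init__()
        self.raw = nn.GRU(1, hidden, batch_first=True)
        self.sym = nn.Embedding(n_bins, hidden)
        self.score = nn.Linear(2 * hidden, 1)   # per-step attribution score
        self.head = nn.Linear(2 * hidden, 1)    # series-level prediction

    def forward(self, x):                        # x: (batch, time)
        h_raw, _ = self.raw(x.unsqueeze(-1))     # (batch, time, hidden)
        h_sym = self.sym(symbolic_bin(x))        # (batch, time, hidden)
        h = torch.cat([h_raw, h_sym], dim=-1)
        attrib = self.score(h).squeeze(-1)       # temporal attribution scores
        w = attrib.softmax(dim=1).unsqueeze(-1)  # attention-style pooling
        return self.head((w * h).sum(dim=1)), attrib

x = torch.randn(4, 100)                          # 4 series, 100 timesteps
y, attrib = TwoBranchAttributor()(x)
print(y.shape, attrib.shape)                     # torch.Size([4, 1]) torch.Size([4, 100])
```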
(2025) Interpretable model for monomeric attribution in protein sequences
Developed an interpretable deep learning model, COLOR, that transforms high-dimensional protein sequences into a lower-dimensional, interpretable representation to estimate the contribution of each monomer to a given property. COLOR achieves 22% higher explainability than existing gradient- and attention-based methods.
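A generic stand-in for the idea (the dimensions and additive head are assumptions, not COLOR's published design): each residue is embedded, projected to a low-dimensional space, and contributes one additive term to the property prediction, so that term serves directly as its monomer attribution.

```python
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"  # 20 canonical amino acids

class MonomerAttributor(nn.Module):
    """Per-residue embeddings are projected to a low-dimensional space and
    summed through a linear head; each monomer's additive term is its
    estimated contribution to the predicted property."""
    def __init__(self, low_dim=4, emb=16):
        super().__init__()
        self.emb = nn.Embedding(len(AA), emb)
        self.proj = nn.Linear(emb, low_dim)      # high-dim -> interpretable low-dim
        self.head = nn.Linear(low_dim, 1, bias=False)

    def forward(self, seq):                      # seq: (batch, length) residue ids
        z = self.proj(self.emb(seq))             # (batch, length, low_dim)
        per_residue = self.head(z).squeeze(-1)   # additive monomer contributions
        return per_residue.sum(dim=1), per_residue

ids = torch.tensor([[AA.index(a) for a in "MKVLAT"]])
prop, contrib = MonomerAttributor()(ids)
print(prop.shape, contrib.shape)                 # torch.Size([1]) torch.Size([1, 6])
```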
(2025) EMG-to-text conversion with LLMs
Developed a Llama 3-based model to convert surface electromyography (EMG) signals, which capture muscle activations, into text. On a closed-vocabulary task, our model achieves an approximately 20% lower word error rate (WER) than specialized models.
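One plausible coupling, sketched under our own assumptions (channel count, strides, and prefix length are placeholders): a small convolutional encoder compresses raw EMG into "soft prompt" vectors sized to the LLM's embedding width, which would then be passed to a Llama 3 checkpoint via `inputs_embeds`.

```python
import torch
import torch.nn as nn

class EMGPrefixEncoder(nn.Module):
    """Encodes multi-channel surface EMG into a sequence of prefix vectors
    matching an LLM's embedding width. The exact coupling to Llama 3 here
    is illustrative, not the published recipe."""
    def __init__(self, n_channels=8, llm_dim=4096, n_prefix=16):
        super().__init__()
        self.conv = nn.Sequential(                # downsample raw EMG in time
            nn.Conv1d(n_channels, 128, kernel_size=9, stride=4, padding=4),
            nn.GELU(),
            nn.Conv1d(128, 128, kernel_size=9, stride=4, padding=4),
            nn.GELU(),
        )
        self.pool = nn.AdaptiveAvgPool1d(n_prefix)
        self.proj = nn.Linear(128, llm_dim)       # match LLM embedding width

    def forward(self, emg):                       # emg: (batch, channels, samples)
        h = self.pool(self.conv(emg))             # (batch, 128, n_prefix)
        return self.proj(h.transpose(1, 2))       # (batch, n_prefix, llm_dim)

emg = torch.randn(2, 8, 4000)                     # 2 clips, 8 electrodes
prefix = EMGPrefixEncoder()(emg)
print(prefix.shape)                               # torch.Size([2, 16, 4096])
# The prefix would be concatenated with token embeddings and fed to the LLM,
# e.g. model(inputs_embeds=torch.cat([prefix, tok_emb], dim=1)) in transformers.
```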
(2024) Predictive model for spider silk’s mechanical properties
Developed an interpretable, feature-based deep learning framework to predict spider silk properties and identify important motifs in a data-constrained setting. We showed that using the B-factor as a motif descriptor improves prediction performance by 15% compared to traditional descriptors such as hydrophobicity and charge.
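To illustrate the descriptor swap: each motif can be summarized by its occurrence count plus the mean B-factor of its residues, and the resulting vector fed to a small predictor. The motif list below reflects common silk motifs, and the per-residue B-factor numbers are illustrative placeholders, not the measured values used in the work.

```python
import torch
import torch.nn as nn

# Hypothetical per-residue B-factor descriptor values (illustrative only).
BFACTOR = {"A": 0.36, "G": 0.54, "P": 0.51, "Q": 0.45, "Y": 0.42, "S": 0.44}

def motif_features(seq, motifs=("GPGQQ", "GA", "AAAA", "GGY")):
    """Describe a silk sequence by, for each motif, its count and the mean
    B-factor of its residues -- the descriptor swap credited with the gain."""
    feats = []
    for m in motifs:
        count = sum(seq[i:i + len(m)] == m for i in range(len(seq) - len(m) + 1))
        feats += [float(count), sum(BFACTOR.get(a, 0.45) for a in m) / len(m)]
    return torch.tensor(feats)

x = motif_features("GPGQQGPGQQAAAAGAGGY")
model = nn.Sequential(nn.Linear(len(x), 16), nn.ReLU(), nn.Linear(16, 1))
print(model(x).shape)   # torch.Size([1]) -- predicted mechanical property
```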
(2023) B-factor prediction in proteins
Developed a many-to-many LSTM model to predict the B-factor (atomic flexibility) of alpha-carbon atoms in proteins, achieving a 30% improvement over the CNN-based state-of-the-art model. Analysis revealed that atoms within 15 Å contribute most significantly to B-factor values.
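A minimal many-to-many setup in PyTorch: the LSTM emits one hidden state per alpha-carbon and a linear layer maps each to a scalar B-factor. Feature and hidden sizes are placeholders, and the bidirectional choice is our assumption.

```python
import torch
import torch.nn as nn

class BFactorLSTM(nn.Module):
    """Many-to-many LSTM: one input feature vector per alpha-carbon,
    one predicted B-factor per position."""
    def __init__(self, n_features=20, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                 # x: (batch, residues, n_features)
        h, _ = self.lstm(x)
        return self.out(h).squeeze(-1)    # (batch, residues) B-factors

x = torch.randn(2, 150, 20)               # 2 proteins, 150 residues each
print(BFactorLSTM()(x).shape)              # torch.Size([2, 150])
```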
(2023) Audio-based emotion prediction
As part of an ACM Multimedia Challenge, we developed an emotion prediction model based on an audio foundation model. We found that using HuBERT-Large as the backbone improved performance by 4%.
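The backbone-plus-head pattern, sketched with the Hugging Face transformers checkpoint facebook/hubert-large-ll60k (weights download on first use); the mean pooling, frozen backbone, and two-output head are our assumptions rather than the challenge submission.

```python
import torch
import torch.nn as nn
from transformers import HubertModel

class EmotionRegressor(nn.Module):
    """HuBERT-Large backbone, mean-pooled over time, with a small head for
    valence/arousal-style emotion scores (head size is a placeholder)."""
    def __init__(self, n_outputs=2):
        super().__init__()
        self.backbone = HubertModel.from_pretrained("facebook/hubert-large-ll60k")
        self.backbone.requires_grad_(False)          # freeze; train head only
        self.head = nn.Linear(self.backbone.config.hidden_size, n_outputs)

    def forward(self, wav):                          # wav: (batch, samples) at 16 kHz
        h = self.backbone(wav).last_hidden_state     # (batch, frames, 1024)
        return self.head(h.mean(dim=1))              # pooled emotion scores

wav = torch.randn(1, 16000)                          # 1 s of audio
print(EmotionRegressor()(wav).shape)                 # torch.Size([1, 2])
```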
(2023) Person identification from biosignals
As part of the ICASSP’23 Challenge, we developed a wav2vec-based deep learning model to identify individuals based on their biosignals, securing 3rd place. We employed a late fusion strategy to effectively handle both time-varying and static features.
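A late-fusion sketch under stated assumptions: each branch scores identities independently, and the logits are combined only at the end. The convolutional branch is a stand-in for the wav2vec encoder; the subject count, feature sizes, and averaging rule are placeholders.

```python
import torch
import torch.nn as nn

class LateFusionID(nn.Module):
    """Temporal branch (wav2vec-like stand-in) for time-varying biosignals,
    MLP branch for static features; late fusion averages the two logits."""
    def __init__(self, n_subjects=50, feat_dim=10, hidden=64):
        super().__init__()
        self.temporal = nn.Sequential(
            nn.Conv1d(1, hidden, kernel_size=10, stride=5),
            nn.GELU(),
            nn.Conv1d(hidden, hidden, kernel_size=8, stride=4),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(hidden, n_subjects),
        )
        self.static = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_subjects),
        )

    def forward(self, signal, static_feats):
        logits_t = self.temporal(signal.unsqueeze(1))  # (batch, n_subjects)
        logits_s = self.static(static_feats)
        return (logits_t + logits_s) / 2               # late (decision-level) fusion

sig, feats = torch.randn(4, 3000), torch.randn(4, 10)
print(LateFusionID()(sig, feats).shape)                # torch.Size([4, 50])
```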