My research focuses on exploring and analyzing the underlying causes of human movement. I believe that language and movement are entwined; hence, I aim to understand the emotions, goals, and plans of people in multimodal environments by combining language with human movement. I am supervised by Michael Black, Director at the MPI for Intelligent Systems.
Doctor of Philosophy (Ph.D.) (September 2019 - present)
Max Planck Institute for Intelligent Systems
Advisor: Michael Black
Research Assistant in SLP-NTUA Laboratory (May 2017 - June 2019)
Cognitive and affective state modeling of natural language.
Top-ranked participation in SemEval 2018 challenge tasks: Affect in Tweets, Emoji Prediction for Tweets, and Irony Detection in English Tweets.
Exploration of cross-topic distributional semantic representations for NLP.
Deep/Machine Learning Engineer (DeepLab, Athens) (September 2017 - November 2018)
Reproduced networks from research papers, investigated the effect of parameters on their performance, and participated in building large-scale experimental pipelines.
Developed NLP preprocessing, training, and testing pipelines for end-to-end sentiment and topic classification. Employed models based on state-of-the-art papers (hierarchical LSTMs, CNNs) using TensorFlow.
Implemented a client framework for communicating with a crowdsourcing REST API (Figure Eight) to create datasets for various modalities.
Examined distributed training and evaluation and their scalability for optimal performance on multiple devices/GPUs.
Diploma in Electrical and Computer Engineering (September 2012 - July 2019)
In Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), June 2019 (inproceedings)
In traditional Distributional Semantic Models (DSMs) the multiple senses of a polysemous word are conflated into a single vector space representation. In this work, we propose a DSM that learns multiple distributional representations of a word based on different topics. First, a separate DSM is trained for each topic and then each of the topic-based DSMs is aligned to a common vector space. Our unsupervised mapping approach is motivated by the hypothesis that words preserving their relative distances in different topic semantic sub-spaces constitute robust semantic anchors that define the mappings between them. Aligned cross-topic representations achieve state-of-the-art results for the task of contextual word similarity. Furthermore, evaluation on NLP downstream tasks shows that multiple topic-based embeddings outperform single-prototype models.
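The alignment step described above can be sketched with orthogonal Procrustes over a set of shared anchor words, a standard way to map one embedding space onto another. The function and toy data below are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def align_topic_spaces(src, tgt, anchors):
    """Align a topic-specific embedding space to a common space using
    orthogonal Procrustes over shared anchor words."""
    X = np.stack([src[w] for w in anchors])   # anchor vectors in the source (topic) space
    Y = np.stack([tgt[w] for w in anchors])   # the same anchors in the target space
    # W = argmin ||XW - Y||_F over orthogonal W, obtained via SVD of X^T Y
    U, _, Vt = np.linalg.svd(X.T @ Y)
    W = U @ Vt
    return {w: v @ W for w, v in src.items()}  # map every source word to the common space

# Toy demo: a "topic" sub-space that is a rotation of the common space
rng = np.random.default_rng(0)
words = ["bank", "river", "money", "rate", "loan"]
common = {w: rng.normal(size=4) for w in words}
rot, _ = np.linalg.qr(rng.normal(size=(4, 4)))   # arbitrary orthogonal rotation
topic = {w: common[w] @ rot for w in words}      # rotated topic sub-space
aligned = align_topic_spaces(topic, common, anchors=["bank", "river", "money", "rate"])
# The held-out word "loan" is recovered by the anchor-derived mapping
print(np.allclose(aligned["loan"], common["loan"], atol=1e-6))  # → True
```

Because the mapping is estimated only from anchors, words outside the anchor set (like "loan" here) are carried into the common space by the same transform, which mirrors the paper's hypothesis that anchors define the mapping between sub-spaces.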
In International Conference on Computational Linguistics (COLING), August 2018 (inproceedings)
Neural activation models that have been proposed in the literature use a set of example words for which fMRI measurements are available in order to find a mapping between word semantics and localized neural activations. Successful mappings let us expand to the full lexicon of concrete nouns using the assumption that similarity of meaning implies similar neural activation patterns. In this paper, we propose a computational model that estimates semantic similarity in the neural activation space and investigates the relative performance of this model for various natural language processing tasks. Despite the simplicity of the proposed model and the very small number of example words used to bootstrap it, the neural activation semantic model performs surprisingly well compared to state-of-the-art word embeddings. Specifically, the neural activation semantic model performs better than the state-of-the-art for the task of semantic similarity estimation between very similar or very dissimilar words, while performing well on other tasks such as entailment and word categorization. These are strong indications that neural activation semantic models can not only shed light on human cognition but also contribute to computational models for certain tasks.
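The core computation can be illustrated as follows: learn a linear map from word embeddings to activation patterns from a few example words, then compare words by cosine similarity in the predicted activation space. The ridge-regression mapping and toy data below are an assumption for illustration, not the paper's exact model:

```python
import numpy as np

def fit_activation_map(emb, act, lam=1e-2):
    """Ridge regression from word embeddings (n x d) to fMRI activation
    patterns (n x v), bootstrapped from a handful of example words."""
    d = emb.shape[1]
    return np.linalg.solve(emb.T @ emb + lam * np.eye(d), emb.T @ act)

def activation_similarity(w1, w2, embeddings, M):
    """Cosine similarity of two words in the predicted activation space."""
    a1, a2 = embeddings[w1] @ M, embeddings[w2] @ M
    return float(a1 @ a2 / (np.linalg.norm(a1) * np.linalg.norm(a2)))

# Toy demo: 5 example nouns, 8-dim embeddings, 20 "voxels"
rng = np.random.default_rng(1)
emb_mat = rng.normal(size=(5, 8))
act_mat = rng.normal(size=(5, 20))
M = fit_activation_map(emb_mat, act_mat)

embeddings = {
    "dog": emb_mat[0],
    "cat": emb_mat[0] + 0.05 * rng.normal(size=8),  # near-synonym: small perturbation
    "truck": emb_mat[1],
}
# Similar embeddings yield high similarity in the activation space
print(activation_similarity("dog", "cat", embeddings, M))
print(activation_similarity("dog", "truck", embeddings, M))
```

The key assumption mirrored here is that similarity of meaning implies similar activation patterns, so semantic similarity can be estimated in the mapped space rather than the original embedding space.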