Aylin Caliskan

My first name is pronounced "Eye-lin"

Assistant Professor
Department of Computer Science
Institute for Data, Democracy, and Politics
George Washington University, SEH 4590
aylin@gwu.edu | @aylin_cim


My research interests lie in artificial intelligence (AI) ethics, AI bias, computer vision, natural language processing, and machine learning, with a strong interest in human-centered computing. I study the impact of machine intelligence on society, especially threats to fairness, privacy, and democracy. I investigate the reasoning behind biased AI representations and decisions by developing explainability methods that uncover and quantify human-like biases learned by machines. Building these transparency-enhancing algorithms involves using machine learning, natural language processing, and computer vision to interpret AI's co-evolution with society and to gain insights into artificial and natural intelligence.


  • I look forward to teaching Machine Learning in Fall 2021.
  • My paper on AI bias, "Semantics derived automatically from language corpora contain human-like biases," was published in Science.
    source code
  • I am the moderator of Computer Science - Computers and Society on arXiv.

Research 2021

  • Akshat Pandey and Aylin Caliskan
    Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy's Price Discrimination Algorithms
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2021)
  • Wei Guo and Aylin Caliskan
    Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2021)
  • Ryan Steed and Aylin Caliskan
    Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases
    The 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2021)
  • Ryan Steed and Aylin Caliskan
    A Set of Distinct Facial Traits Learned by Machines Is Not Predictive of Appearance Bias in the Wild
    AI and Ethics, 2021
  • Autumn Toney, Akshat Pandey, Wei Guo, David Broniatowski, and Aylin Caliskan
    Automatically Characterizing Targeted Information Operations Through Biases Present in Discourse on Twitter
    15th IEEE International Conference on Semantic Computing (ICSC 2021)