Aylin Caliskan

My first name is pronounced "Eye-leen."

Assistant Professor
Information School
University of Washington
Nonresident Fellow in Governance at Brookings
aylin@uw.edu · @aylin_cim


My research interests lie in artificial intelligence (AI) ethics, AI bias, computer vision, natural language processing, and machine learning, with an emphasis on trustworthiness and responsibility in AI. I study the impact of machine intelligence on society, especially threats to privacy and equity. I investigate the reasoning behind biased AI representations and decisions by developing explainability methods that uncover and quantify human-like biases learned by machines. Building these transparency-enhancing algorithms involves the use of machine learning, natural language processing, and computer vision to interpret AI's co-evolution with society and gain insights into artificial and natural intelligence.


  • I am moving to the University of Washington at the end of summer 2021.
  • My paper on AI bias, "Semantics derived automatically from language corpora contain human-like biases," was published in Science.
    source code
  • I am the moderator of Computer Science - Computers and Society on arXiv.

Research 2021


  • Robert Wolfe and Aylin Caliskan
    Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models
    Empirical Methods in Natural Language Processing (EMNLP 2021)
  • Autumn Toney-Wails and Aylin Caliskan
    ValNorm Quantifies Semantics to Reveal Consistent Valence Biases Across Languages and Over Centuries
    Empirical Methods in Natural Language Processing (EMNLP 2021)
  • Aylin Caliskan
    Detecting and mitigating bias in natural language processing
    Brookings 2021
  • Akshat Pandey and Aylin Caliskan
    Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy's Price Discrimination Algorithms
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2021)
  • Wei Guo and Aylin Caliskan
    Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases
    AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AAAI/ACM AIES 2021)
  • Ryan Steed and Aylin Caliskan
    Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases
    ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2021)
  • Ryan Steed and Aylin Caliskan
    A Set of Distinct Facial Traits Learned by Machines Is Not Predictive of Appearance Bias in the Wild
    AI and Ethics, 2021
  • Autumn Toney, Akshat Pandey, Wei Guo, David Broniatowski, and Aylin Caliskan
    Automatically Characterizing Targeted Information Operations Through Biases Present in Discourse on Twitter
    15th IEEE International Conference on Semantic Computing (ICSC 2021)