Publications
- A Large Scale Social Web Audit of AI Generated Text Detection Systems. Arka Dutta, Utkarshani Jaimini, Utkarsh Bhatt, Sara Shree Muthuselvam, Amitava Das, and Ashiqur R. KhudaBukhsh. (ICWSM 2026).
- Navigating the Rabbit Hole: Emergent Biases in LLM-Generated Attack Narratives Targeting Mental Health Groups. Rijul Magu, Arka Dutta, Sean Kim, Ashiqur R. KhudaBukhsh, and Munmun De Choudhury. (COLM 2025).
- Towards a Bipartisan Understanding of Peace and Vicarious Interactions. Arka Dutta*, Syed M. Sualeh Ali*, Usman Naseem, and Ashiqur R. KhudaBukhsh. (IJCAI 2025).
- All You Need Is S P A C E: When Jailbreaking Meets Bias Audit and Reveals What Lies Beneath the Guardrails (Student Abstract). Arka Dutta, Aman Priyanshu, and Ashiqur R. KhudaBukhsh. (AAAI 2025).
- Down the Toxicity Rabbit Hole: A Framework to Bias Audit Large Language Models with Key Emphasis on Racism, Antisemitism, and Misogyny. Arka Dutta, Adel Khorramrouz, Sujan Dutta, and Ashiqur R. KhudaBukhsh. (IJCAI 2024).
- Classification of Cricket Shots from Cricket Videos Using Self-attention Infused CNN-RNN (SAICNN-RNN). Arka Dutta, Abhishek Baral, Sayan Kundu, Sayantan Biswas, Kousik Dasgupta, and Hasanujjaman. (CICBA 2023). [Springer Link]
Manuscripts
- Investigating Intersectionality in Large Language Models. Soumyajit Datta, Arka Dutta, Sujan Dutta, and Ashiqur R. KhudaBukhsh. (Under review; arXiv version available).
- How Can You Tell if Your Large Language Model Could Be a Closet Antisemite? A Framework to Bias Audit Large Language Models. Arka Dutta, Reza Fayyazi, Shanchieh Yang, and Ashiqur R. KhudaBukhsh. (Under review; arXiv version available).
- Counterbalancing Hate with Positivity: A Survey of Counterspeech. Ashiqur R. KhudaBukhsh, Arka Dutta, Sarthak Roy, and Animesh Mukherjee. (Under review; arXiv version available).