Publications
- Towards a Bipartisan Understanding of Peace and Vicarious Interactions. Arka Dutta*, Syed M. Sualeh Ali*, Usman Naseem, and Ashiqur R. KhudaBukhsh. (International Joint Conference on Artificial Intelligence, AI for Social Good Track, IJCAI 2025).
- All You Need Is S P A C E: When Jailbreaking Meets Bias Audit and Reveals What Lies Beneath the Guardrails (Student Abstract). Arka Dutta, Aman Priyanshu, and Ashiqur R. KhudaBukhsh. (AAAI Conference on Artificial Intelligence, AAAI 2025). [PDF] (Oral Presentation, Acceptance rate ~12%)
- Down the Toxicity Rabbit Hole: A Novel Framework to Bias Audit Large Language Models. Arka Dutta*, Adel Khorramrouz*, Sujan Dutta, and Ashiqur R. KhudaBukhsh. (International Joint Conference on Artificial Intelligence, AI for Social Good Track, IJCAI 2024). [PDF] (Poster, Acceptance rate ~15%)
- Classification of Cricket Shots from Cricket Videos Using Self-attention Infused CNN-RNN (SAICNN-RNN). Arka Dutta, Abhishek Baral, Sayan Kundu, Sayantan Biswas, Kousik Dasgupta, and Hasanujjaman. (International Conference on Computational Intelligence in Communications and Business Analytics, CICBA 2023). [Springer Link]
Manuscripts
- A Large Scale Social Web Audit of AI Generated Text Detection Systems. Arka Dutta, Utkarshani Jaimini, Utkarsh Bhatt, Sara Shree Muthuselvam, Amitava Das, and Ashiqur R. KhudaBukhsh. (In Review; arXiv version) [PDF]
- Navigating the Rabbit Hole: Emergent Biases in LLM-Generated Attack Narratives Targeting Mental Health Groups. Rijul Magu, Arka Dutta, Sean Kim, Ashiqur R. KhudaBukhsh, and Munmun De Choudhury. (In Review; arXiv version) [PDF]
- How Can You Tell if Your Large Language Model Could Be a Closet Antisemite? A Framework to Bias Audit Large Language Models. Arka Dutta, Reza Fayyazi, Shanchieh Yang, and Ashiqur R. KhudaBukhsh. (In Review; arXiv version)
- Counterbalancing Hate with Positivity: A Survey of Counterspeech. Ashiqur R. KhudaBukhsh, Arka Dutta, Sarthak Roy, and Animesh Mukherjee. (In Review; arXiv version)