
Co-ordinators: Prof. Vincenzo Pacillo and Dr. Elisabet Bolarin
The Religions, Law, and AI cluster of the ORFECT Center at the University of Modena and Reggio Emilia explores the transformative impact of artificial intelligence on the legal regulation of religious life and the protection of fundamental rights, together with the emerging challenges at the intersection of digital technologies, belief systems, and normative pluralism.
Informed by the EU’s AI Act and its risk-based regulatory approach, the cluster develops both critical analysis and practical guidance on how AI systems intersect with religious freedom, anti-discrimination law, and the broader cultural and ethical frameworks embedded in religious traditions.
Objectives
• To examine how AI technologies affect the expression and regulation of religious identities, particularly in the domains of biometric surveillance, algorithmic bias in employment and credit scoring, and automated decision-making in healthcare or education.
• To assess the compatibility between AI governance principles (such as transparency, fairness, and human oversight) and the normative expectations of different religious communities, with particular attention to non-discrimination, privacy, and dignity.
• To develop intercultural legal models that prevent the use of AI as a tool of symbolic or structural violence, particularly in workplace settings, public space governance, and digital media.
• To explore the theological and ethical responses of religious traditions to AI, focusing on notions of human agency, conscience, moral responsibility, and the sacred in the era of machine reasoning.
The cluster adopts an interdisciplinary methodology, combining:
• Legal analysis, with a focus on European human rights law, the AI Act (Regulation 2024/1689), and national regulations on religious freedom.
• Comparative religion, to understand how faith traditions conceptualize knowledge, truth, and representation in the digital domain.
• Ethical theory, to reflect on the implications of algorithmic decision-making on conscience and human dignity.
• Case study research, particularly on high-risk AI applications affecting religious expression (e.g., religious profiling in hiring, bias in medical AI, surveillance of religious gatherings).
Key Areas of Inquiry
1. AI and Religious Discrimination
• Exploring cases such as algorithmic exclusion in hiring processes or loan approvals based on religiously coded names or practices (cf. AI Act Articles 6, 10, and 13).
• Analyzing the role of conformity assessments and fairness audits in preventing systemic bias (a minimal illustrative sketch follows below).
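Purely as an illustration of the kind of numeric check a fairness audit might include, and not as part of the cluster's legal methodology, the following Python sketch compares selection rates across groups against the informal "four-fifths" rule; the group labels, the data format, and the 0.8 threshold are assumptions made for the example only.

```python
# Illustrative only: a minimal disparate-impact check of the kind a fairness
# audit might apply to hiring or lending outcomes. Group labels and the 0.8
# ("four-fifths") threshold are assumptions for this sketch, not AI Act rules.
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group_label, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, chosen in records:
        totals[group] += 1
        selected[group] += int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_flags(records, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    rate of the best-performing group (the informal four-fifths rule)."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

if __name__ == "__main__":
    sample = [("group_a", True), ("group_a", True), ("group_a", False),
              ("group_b", True), ("group_b", False), ("group_b", False)]
    print(disparate_impact_flags(sample))  # {'group_a': False, 'group_b': True}
```

Such a statistical check is, of course, only a small fragment of what conformity assessments under the AI Act require, which also involve documentation, data-governance, and human-oversight obligations.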
2. Biometric Surveillance and Religious Assemblies
• Investigating the legality and ethicality of facial recognition at places of worship or during public religious events (cf. Article 5(1)(h) AI Act).
• Emphasizing the need for judicial oversight and safeguards against chilling effects on freedom of religion and assembly.
3. Algorithmic Profiling and the Right to Conscience
• Evaluating how opaque AI systems may undermine the moral autonomy of individuals by excluding religious reasoning from public justification.
• Connecting this concern to broader debates on pluralism, dissent, and the epistemic status of belief in law and society.
4. Human Oversight and Theological Anthropology
• Linking the requirement for human oversight in high-risk systems (Article 14 AI Act) to religious understandings of moral responsibility and human distinctiveness.
• Reflecting on how religious traditions contribute to a critique of the illusion of algorithmic omniscience.
5. Regulatory Innovation and Intercultural Law
• Proposing new legal instruments (e.g., based on the Modena Charter for Religious Freedom in the Workplace) that incorporate religious perspectives into AI governance, particularly in multicultural organizational settings.
• Supporting the development of intercultural regulatory sandboxes that respect symbolic pluralism while ensuring compliance with EU norms.
The cluster is committed to ensuring that AI development and deployment respect religious diversity and promote intercultural justice. It contributes to:
• Academic research and publications.
• Policy recommendations to national and EU institutions.
• Dialogues with religious communities and civil society.
• Training modules for AI developers, legal professionals, and educators.
By placing religious freedom at the center of the AI ethics and law debate, the cluster seeks to reimagine technological innovation not merely as a technical endeavor but as a site of moral and cultural negotiation.