General information
XXIst International Congress of Penal Law 2024
Artificial Intelligence and Criminal Law
Tuesday, 25 June 2024 – Friday, 28 June 2024
Paris, France
We are pleased to invite you to the XXIst International Congress of the International Association of Penal Law (AIDP/IAPL) on Artificial Intelligence and Criminal Law, which will take place in Paris on 25-28 June 2024.
This Congress is organised by the AIDP/IAPL and will bring together top legal experts, scholars, and professionals to discuss and exchange ideas on criminal law and artificial intelligence. The event is under the High Patronage of Mr Emmanuel MACRON, the President of the French Republic.
Various events of the Congress will take place in different locations in Paris.
Explore the Congress by visiting the official website:
Practical information:
*******
- Criminal law, in its broadest sense, can no longer ignore AI and the challenges it raises. In-depth reflection on whether, to what extent, and under which conditions the use of AI systems in criminal justice is compatible with the basic tenets of our legal systems can no longer be postponed
- A key question behind AI-related legal problems is whether AI falls within existing categories (depending on the context: legal entities, children, child soldiers, weapons, even animals, etc.) or whether it is so unprecedented that it requires new categories and principles
- Due to the nature of the subject and the indisputable fact that ‘the technological developments have far outpaced legal or policy debates’ around it (Ferguson), a strongly interdisciplinary approach is necessary, involving practitioners, scholars from other branches of law, and experts in fields other than law (AI programmers, bioethicists, criminologists, scientists, crime analysts, etc.)
- General issue:
- What is AI? There is no universally accepted definition, but the European Commission’s definition is a useful reference point:
- systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones …)
- Legislators must keep pace with scientific innovation. Criminal law literature is paying increasing attention to the matter, since there is a strong need to conceptualise and address the many legal issues that AI poses:
- To what extent is a crime committed by an AI system attributable to a human being, or to the AI itself (if that is even possible)?
- How can AI systems help law enforcement authorities (LEAs) prevent and combat crime?
- Can we rely on AI to adopt decisions in criminal law cases?
- The XXIst International Congress of Penal Law should address these topics by taking into account the various aspects of criminal justice
- Section 1. Traditional Criminal Law Categories and AI: Crisis or Palingenesis?
- Can the traditional criminal law categories of the General Part (substantive criminal law) – especially actus reus, mens rea, and causation – apply to crimes committed by or through AI systems, or do they need some (deep) rethinking to face the challenges ahead?
- For instance, autonomous driving: if an accident happens, who is responsible? Questions:
- Can we envisage a form of direct liability of AI systems?
- Would it make sense to ‘punish’ a machine?
- If the AI system was programmed with the intent to kill/harm people, the person who programmed it should normally be responsible
- In less clear scenarios, things can be more complicated: who is the person responsible (manufacturer, programmer, user)? Can the human operator disengage the autonomous driving system? Can we apply the ‘perpetration-by-another’ or ‘natural probable consequence’ models?
- If the accident is the outcome of the machine learning process of the AI system, can we argue that this broke the chain of causation?
- Section 2: Old and New Criminal Offences: AI Systems as Instruments and Victims
- Special part of substantive criminal law:
- 1) AI systems to commit ‘traditional’ crimes; and
- 2) ‘new’ AI-related crimes
- ‘Traditional’ crimes by means of AI: for instance, drug trafficking, terrorism, market manipulation, and online fraud, especially ‘spear phishing’, i.e. an ‘email or electronic communications scam targeted towards a specific individual, organization or business’ (Kaspersky) to steal data or install malware
- ‘New’ AI-related crimes:
- AI as a victim of new crimes (e.g., hacking an autonomous driving car)
- AI as an instrument of new crimes (e.g., creation/spreading of fake news)
- Possible interactions between AI and cryptography, with a focus on technologies that build on cryptography, such as blockchain; this combination may raise complex questions, as it may facilitate the commission of existing or new crimes and require ad hoc criminal law provisions
- Section 3. AI and Administration of Justice: Predictive Policing and Predictive Justice
- Procedural law and administration of justice:
- 1) predictive policing; and
- 2) predictive justice
- By processing enormous quantities of data, AI can make predictions about:
- where and when crimes are likely to be committed, and even by whom in some cases (predictive policing) – PredPol (US), Precobs (some EU States)
- whether a suspect or defendant is likely to flee or commit further crimes, on which basis criminal courts can deny bail or opt for harsher sentences (predictive justice) – COMPAS and the Loomis case of the Supreme Court of Wisconsin
- Some questions to be answered:
- Are predictive policing instruments really useful to prevent crime?
- Do predictions violate human rights (e.g. presumption of innocence)?
- Is the outcome of predictive instruments neutral, objective, and unbiased?
- What is the impact of predictive methods on the administration of justice and the role of public authorities?
- Can AI replace criminal courts and juries?
- Section 4: International Perspectives on AI: Challenges for Judicial Cooperation and International Humanitarian/Criminal Law
- International implications of the use of AI systems, in particular the impact of AI on:
- a) evidence gathering, also through the prism of international cooperation; and
- b) international humanitarian law (IHL) and international criminal law (ICL), with respect to the use of robots in war
- AI for law enforcement purposes: analysis of DNA or social media profiles, identification of fake artworks, facial recognition, location of events and places, etc.
- Issues concerning the right to a fair trial with respect to AI evidence:
- Is the defendant able to challenge the way in which evidence was gathered?
- Will the use of AI be the end of the equality of arms principle?
- Similar questions become even more complex in cross-border cooperation
- Are existing instruments of cooperation in criminal matters able to ensure exchange, admissibility, and use of AI-related evidence in a satisfactory way? Is there a need for a coordinated approach on the international level?
- On the other hand, can AI make judicial cooperation easier?
- Autonomous weapon system (AWS) (or killer robot): ‘a weapon system that, based on conclusions derived from gathered information and preprogrammed constraints, is capable of independently selecting and engaging targets’ (Crootof)
- Ius ad bellum issue: because of ‘dehumanization’, do AWSs increase the possibility of future wars?
- Ius in bello issues: do AWSs increase the risks of violating the principles of distinction and proportionality? Do they cause superfluous injury or unnecessary suffering, and should they therefore be prohibited?
- In the case of international crimes committed by AWSs, who is responsible? There is a risk of ‘organized irresponsibility’ (Wagner)
- Is there a need for an international approach? E.g. absolute ban on the use of AWSs in war, at least as long as their use is unlikely to be compliant with the core principles and rules of IHL, or an international agreement that regulates the development and use of AWSs?