Handling inconsistent and uncertain legal reasoning for AI vehicles design

Yiwei Lu*, Yuhui Lin, Burkhard Schafer, Andrew Ireland, Lachlan Urquhart, Zhe Yu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

As AI products continue to evolve, legal problems are increasingly emerging for the engineers who design them. For example, if the aim is to build an autonomous vehicle (AV) that adheres to current laws, should we give it the ability to ignore a red traffic light in an emergency, or is this merely an excuse we permit humans to make? The paper argues that some of the changes brought by AVs are best understood as necessitating a revision of law’s ontology. Current laws are often ambiguous, inconsistent or undefined when it comes to technologies that make use of AI. Engineers would benefit from decision support tools that provide them with legal advice and guidance on their design decisions. This research aims to explore a new representation of legal ontology by importing argumentation theory and constructing a trustworthy legal decision system. While the ideas are generally applicable to AI products, our initial focus has been on Autonomous Vehicles (AVs).
Original language: English
Title of host publication: Proceedings of the International Workshop on Methodologies for Translating Legal Norms into Formal Representations (LN2FR 2022)
Editors: Ken Satoh, Georg Borges, Erich Schweighofer
Publication status: Published - 20 May 2023

Publication series

Name: Logic in Computing

