Young Digital Law 2023

The 3rd Conference of the Research Network Young Digital Law is jointly hosted by the Department of Innovation and Digitalisation in Law and the Research Platform Governance of Digital Practices, in cooperation with the Research Group Security and Privacy.


Programme


  • Wednesday, July 5
    Time | Speaker | Title

    09:30 – 09:45 | YDL 2023 Team | Conference Opening

    09:45 – 10:30 | Nikolaus Forgó | Keynote: What is Young about Young Digital Law?

    10:30 – 11:00 | Coffee Break

    Block I: Human-AI Relations

    11:00 – 11:30 | Jan Horstmann | Brauchen wir ein Recht auf automatisierte Entscheidung? (Do We Need a Right to Automated Decisions?)

    In the "age of algorithms", automated decisions appear possible in ever more areas of life. The legal response so far has focused primarily on containing automated systems in order to protect the rights and interests of those subject to such a system's decisions. The prohibition of automated individual decisions in Art. 22 GDPR ultimately enshrines a right to human involvement in the decision. The ethics guidelines of the EU Commission's High-Level Expert Group on AI likewise give priority to human agency. There are, however, indications that automated decisions may match or even surpass those of human actors with respect to relevant concerns such as accuracy, efficiency, transparency and fairness. The principle of human final decision-making therefore faces the critical question of whether reservations about automated decisions can be justified when machines achieve fair and efficient results. Put pointedly: (when) should machine decisions be given legal priority over human ones? So far, this question has only been touched upon or posed cautiously, and rarely examined explicitly in detail. If, however, we re-differentiate the human subject position in the relationship between human and machine, this also happens by determining the influence that should remain reserved to humans in important decisions. This includes the fundamental debate on whether protection against the power of the machine should be complemented by protection against human arbitrariness, here in the form of a right to a machine decision. Starting from a description of the socio-technical situation and its analysis from a normative perspective, the foundations for deriving a right to automated decision-making can be examined. Subsequently, possible points of reference de lege lata will be discussed by way of example, which also reveal the problems of a right to automated decision.

    11:30 – 12:00 | Eva Beute, Anna-Katharina Dhungel | KI-gesteuertes Recht: Eine interdisziplinäre Perspektive auf den Einsatz bei der Gesetzgebung und -auslegung (AI-Driven Law: An Interdisciplinary Perspective on Its Use in Legislation and Legal Interpretation)

    Artificial intelligence is now ubiquitous, yet numerous myths still surround the dazzling term and the technologies behind it. Ideas, expectations and fears about the present and future potential of AI are diverse. AI is often mentally associated with science-fiction scenarios and the fear that it will render humans superfluous. Especially in the performance of state functions, the turn away from human decision-making towards an algorithmic decision-maker is accompanied by great concern about a loss of control and by objections grounded in democratic theory. The principle of democracy requires that any exercise of public authority be legitimised, which raises the question of whether the transfer of decisions to autonomous algorithms can be democratically legitimised as an exercise of sovereign power at all. First, however, it must be clarified how much AI there will really be in the democratic constitutional state in the foreseeable future: What is technically possible, for which areas is AI suitable at all and, above all, to what extent is AI support even wanted by the relevant professional groups? We examine these questions with regard to the use of AI in the legislative process and in adjudication. By looking at these two use cases, we aim to put the fears of the "robot judge" and the "legislation machine" into perspective. The application areas are then also analysed with regard to legal questions. Overall, the talk is intended to provide a practice-oriented insight into the development of AI use for legislative procedures and legal interpretation in Germany and to critically question catchphrases such as "rule of the machines".

    12:00 – 12:30 | Moritz Griesel, Tizian Matschak | Legal Requirements for Human Oversight within the AI Act Proposal

    In April 2021, the European Commission published the Proposal for an Artificial Intelligence Act (AI-A-P), introducing a risk-based approach. The requirements for high-risk AI systems are laid down in Chapter 2. Among others, Art. 14 (1) AI-A-P stipulates that these systems shall be designed and developed in such a way that natural persons can effectively oversee them during the period in which the AI system is in use. In particular, the human supervisor must understand the AI system's capabilities and limitations and supervise it appropriately. However, it remains unclear how these demanding regulatory requirements can be met in an application that is explicitly built to deliver economic efficiency through autonomous decision-making. The presentation takes up this observation and gives an overview of the currently debated legal requirements for human oversight.
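    Art. 14 AI-A-P prescribes no particular technical mechanism for oversight. Purely as an illustration of one pattern engineers reach for, the Python sketch below routes low-confidence outputs of a hypothetical model to a human reviewer; the threshold, names and review function are all invented, not taken from the Proposal:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    label: str          # outcome, e.g. "approve" or "reject"
    confidence: float   # the model's self-reported confidence in [0, 1]
    decided_by: str     # "model" or "human"

def oversee(model_label: str, confidence: float,
            human_review: Callable[[str, float], str],
            threshold: float = 0.9) -> Decision:
    """Escalate uncertain automated decisions to a human supervisor."""
    if confidence >= threshold:
        return Decision(model_label, confidence, decided_by="model")
    # Below the threshold, the human sees the model's suggestion and decides.
    return Decision(human_review(model_label, confidence), confidence, "human")

# Example: a (lazy) reviewer who simply confirms the model's suggestion.
print(oversee("reject", 0.72, human_review=lambda label, conf: label))
```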

     

    12:30 – 13:00 | Ann-Kristin Mayrhofer | Enterprise Liability for Human-AI Decisions: A Multidisciplinary Approach for Identifying Principals' Duties of Care

    AI systems are improving constantly. However, it is unlikely that AI systems will completely replace humans any time soon. Rather, AI systems and humans will increasingly work side by side. The integration of human-AI collaboration holds great promise for many enterprises. The idea is to keep the human "in the loop" in order to fully exploit both AI and human potential. However, such collaboration comes with its own specific risks. Damage may still occur, e.g. when the human-AI decision concerns medical treatment or the manufacturing of dangerous products. This, of course, raises the question of the organisation's liability. In jurisdictions where the liability of the principal requires fault, as is generally the case in Germany, the answer to this question depends largely on the scope of the principal's duties of care. These duties of care will be explored in this contribution, which will also include a brief look at the European Commission's Proposal for an AI Liability Directive of 28 September 2022. A multidisciplinary approach will be used to identify the safety measures that can and must be taken to manage the specific risks associated with human-AI decisions. The intention is not to provide an exhaustive list of the principal's duties of care but to establish a framework and to illustrate the value of input from other disciplines. Studies conducted by social scientists point to typical problems of human-AI decisions and possible mitigating factors: three main phenomena appear to prevent humans from performing their role in human-AI decisions successfully, namely automation complacency, automation bias and algorithmic aversion. Scientists have proposed measures to deal with these phenomena. However, to avoid fault-based liability, the principal is generally not obliged to avert every risk by taking every possible measure. This contribution therefore also seeks to shed light on the limits of the principal's duties of care. Here, too, input from other disciplines, in particular from economic approaches, proves fruitful.

    13:00 – 14:00 | Lunch Break

    14:15 – 15:00 | Edgar Weippl | Keynote: Using Smart Contracts for Minimizing Transaction Costs of Illegal Activities

    15:00 – 15:30 | Coffee Break

    15:30 – 17:00 | Mixed Panel | Power Circuits: Intersections of Digital Infrastructures and Constitutions

    Chair: Margarita Boenig-Liptsin
    Panelists: Kebene Wodajo, Raffaela Kunz, Angelo Golia

    What do we know about how digital infrastructures and constitutions relate? Both recede into the background of everyday life while serving as foundational discursive and material supports and blueprints for social relationships, agency and power. Both concepts have also been turned into theoretical frameworks for analysing the mutual interplay between technology and forms of governing life: infrastructure studies and (bio)constitutionalism, respectively. Infrastructures and constitutions, as concepts and as studies/"-isms", have been doubly generative for scholars of digital technologies and the law, helping them uncover similarities, differences, and interplay and reconfiguration between legal and technical forms of ordering societies. For example, some scholars of infrastructure studies have described constitutions as a special kind of infrastructure, while also noting that constitutions require certain infrastructures in order to function (Edwards 2003). Others, following Lawrence Lessig's argument that "code is law," have observed how computer hardware and software constitute a force for governing human behaviour and relations in both the online and the physical world, similar to constitutions (Lessig 1999; Van der Meerssche 2021; Kingsbury and Maisley 2021). Work in (bio)constitutionalism in STS identifies the mutual interplay between constitutional orders, with their explicit as well as informal arrangements of the balance of power, rights, and responsibilities, and scientific and technical ways of knowing and ordering the world (Jasanoff 2011; Hurlbut et al. 2020). STS scholarship on constitutionalism recognises the profound role that scientific and technical infrastructures, especially knowledge infrastructures, play in the gradual transformation of core constitutional values, such as privacy and freedoms, as well as of the concept of the human subject. Legal constitutionalist scholarship likewise highlights the role of scientific and technical infrastructures in the blurring line between private and public power (De Gregorio 2022) and in the competition of values in the ordering of technologically mediated society. This panel brings together scholars of the digital, society, and the law who work with concepts and theories of digital infrastructure and constitutionalism. Panelists will discuss how the intersection of these concepts and theories has been conceived in previous work, which fruitful insights about the relationship of digital technologies and the law have come out of these crossings, and what exciting new directions of inquiry are enabled by thinking these two distinct yet resonant concepts and theoretical approaches together. In particular, the panelists will consider the opportunities and challenges of these concepts and approaches for thinking about the transforming human subject and their agency, rights and responsibilities in digital societies.

    17:30 – 21:00 | Networking & Rooftop Bar, supported by Freshfields, Peregringasse 4, 1090 Wien

  • Thursday, July 6

    Time | Speaker | Title

    09:15 – 10:00 | Iris Eisenberger | Keynote

    10:00 – 10:30 | Coffee Break

    Block II: Who is the Subject?

    10:30 – 11:00 | Kinan Sabbagh | The User as Subject of Digitality? About the User's Position between Civil and Public Law

    The legal system speaks of users in various contexts: in abstract terms, a natural person generates immaterial added value through the use of an object (e.g. a thing, a service or an application). This added value increasingly also consists in the realisation of public-law interests: fundamental rights are increasingly exercised digitally and in relation to third parties. Furthermore, impairments of constitutionally guaranteed legal positions increasingly emanate from private actors with de facto regulatory competence and technological independence. To grasp the significance of the digital exercise of fundamental rights, a new conception of the protection of freedom is required, in at least a supranational dimension: the interpretation of this new relationship can only succeed with knowledge of the relevant subject and the interests immanent to this position. Moreover, the ubiquity of online interactions and the shift of everyday life to the Internet may have given rise to an original public-law category: "the user, the user community". In this sense, user status describes a concrete digital position in a specific relationship to various structural providers. Relevant digital regulation seems to have increasingly discovered the concept of the user for itself; consequently, the natural or legal person is addressed - an understanding that is plausible in view of the fundamental-rights dogmatics of anthropocentric constitutional orders, but by no means compelling: in fact, users are neither necessarily human, nor do they belong to a specific state. In light of this, the conference contribution aims to abstract and systematise the position of the user on the basis of its immanent interests. It will be determined whether, in view of a conglomerate of affected fundamental rights, the user can be a public-law category and what conclusions arise from this perspective for the protection of digital freedom.

    11:00 – 11:30 | Suad Salihu | Das digitale Subjekt: Grundlage für die Personalisierung des Rechts (The Digital Subject: A Basis for the Personalisation of Law)

    Modern Western legal orders rest not only on the idea of the human being as a free and autonomous being, but also on the assumption that law is just only when it consists of general, abstract statutory provisions that treat everyone equally. In the age of data analytics and artificial intelligence, however, both are being shaken: the self-determined human being and the general-abstract legal order alike. Computer-driven processes construct new digital forms of the subject that differ not only from the human being in the sense of body and soul, but also from the conventional, abstract concept of the legal subject. My contribution sets out aspects of the construction process of the digital subject. It draws on nodal-point theory, according to which the subject is formed by computational processes in the linking of links and likes, matches and tracks. Before that, the digital subject does not exist; only through its algorithmic "actualisation" does it attain a digital "consciousness", however limited. A single nodal point is already capable of constructing a digital subject, even though it takes only a very limited part of that subject's interests into account. It is precisely the hallmark of the digital subject that it is "never viewed in its entirety, but singularised as something modular, i.e. as something composed of discrete components". At the moment of actualisation, personalisation also takes place: the attribution of techno-cultural differences and commonalities to the human being outside. Social systems are developing a growing interest in this actualisation and the personalisation that accompanies it. The economy responds with personalised prices and individual product offers, medicine with personalised therapies, and the law - so the thesis advanced here - with individual, concrete provisions. According to one decided view, the law has generalised so far only because the legislator lacked the information for personalised norm-setting. With new digital methods of algorithmic data processing, the legislator now has the means to draft personalised rather than one-size-fits-all rules. My contribution is thus an attempt to question the assumptions of legal subject and legal norm against the background of digitalisation.

    11:30 – 12:00 | Rüya Tuna Toparlak | He, She, It: Addressing the Social Valence of Robots under Legal Subjectivity

    Social robots interact with humans in various contexts. They are able to recognise, interpret, mimic, and respond to human emotions. Their social valence makes them appear to us like social actors, more than any previous technology. This effect carries potential benefits as well as risks that have already started to create social, cultural, economic, and legal tensions. The effect is so prominent and systematic that researchers have argued for a new ontological category for robots, somewhere between an object and an agent. Such a categorisation is of particular interest to law as a system that constructs the object/subject paradigm at its centre. This paper inspects the legal tensions caused by the social valence of robots and asks how the law should address them, in particular whether a transformation should happen in the legal construction of the subject/object. With that, the paper intends to contribute to the discussion at the third annual Young Digital Law Conference on how prevailing and upcoming legislation pictures human-machine relationships.

    12:00 – 13:00 | Lunch Break

    Block III: Why do we trust?

    13:00 – 13:30 | Yann Schoenenberger, Yann Conti | Implementation of Digital Data Erasure: An Interdisciplinary Perspective

    This contribution provides an interdisciplinary perspective on the issue of data erasure by comparing the legal notion of erasure to its technical definition and its applicability to digital data. Our analysis aims to identify potential technological biases in the law-making behind current data protection policies. We lay out an analysis of the notion of data erasure as enshrined in the "right to erasure" as it currently stands in data protection laws. We focus on the General Data Protection Regulation and on relevant aspects of the revised Swiss Data Protection Act, which comes into force in September 2023. In that regard, we highlight how both laws define erasure and, consequently, how legal scholars, data protection authorities and tribunals understand its implementation. On that basis, we compare the erasure of digital data to the destruction of information in the analogue sense and discuss how the fundamental ways in which digital data is stored, processed, and transferred require their own mental framework. We present common ways in which digital data poses challenges to erasure, touching on copying, anonymization, and territoriality. We conclude by opening the discussion to ways in which current legislation might frame digital data such that data controllers can de facto retain and exploit personal data while still complying with the law when erasure is required. With this in mind, we ask whether specific changes or an overhaul of current legal practice should be considered.
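    The gap between legal and technical erasure that the authors examine can be made concrete: deleting a file typically only unlinks it, and replicas or backups may retain copies. One technique often discussed as a practical approximation of erasure is crypto-shredding, where data is stored encrypted and "erased" by destroying the key. A minimal Python sketch of the idea using the `cryptography` library's Fernet API; the in-memory key store and record store are hypothetical stand-ins:

```python
from cryptography.fernet import Fernet

# Hypothetical in-memory stores: per-subject keys and encrypted records.
key_store: dict[str, bytes] = {}
records: dict[str, bytes] = {}   # ciphertexts may be copied or backed up

def store(subject_id: str, personal_data: bytes) -> None:
    key = key_store.setdefault(subject_id, Fernet.generate_key())
    records[subject_id] = Fernet(key).encrypt(personal_data)

def read(subject_id: str) -> bytes:
    return Fernet(key_store[subject_id]).decrypt(records[subject_id])

def erase(subject_id: str) -> None:
    # Destroying the key renders every copy of the ciphertext unreadable,
    # including copies in backups the controller cannot easily reach.
    del key_store[subject_id]

store("alice", b"alice@example.org")
print(read("alice"))   # b'alice@example.org'
erase("alice")
# read("alice") now raises KeyError: the ciphertext survives, the information is gone.
```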

    13:30 – 14:00 | Rachel Griffin | Procedural Fetishism in the Digital Services Act

    The content moderation practices of dominant social media platforms have raised widespread concern about arbitrary censorship. Evidence suggests that they operate highly unequally, disproportionately censoring marginalised users, while inadequately protecting them against hate speech and harassment. The EU’s main response to such issues is the 2022 Digital Services Act (DSA). As regards the regulation of content moderation, it primarily focuses on empowering individuals to challenge moderation of their content (e.g. by requiring platforms to notify users of decisions and allow them to appeal). Analysing the DSA from a feminist perspective, I describe this approach in terms of ‘procedural fetishism’, and develop a critique on three levels. First, existing evidence as to how such systems work in practice suggests they will have little practical impact, and are likely to disproportionately benefit more privileged users. Second, even ignoring these practical limitations, focusing on procedural fairness is normatively unsatisfactory as a way of regulating content moderation. Reviewing individual decisions cannot address the higher-level decisions and systemic biases that produce unreliable and discriminatory moderation. Moreover, the DSA allows platforms discretion over substantive policies, provided they are applied in a procedurally fair way—including policies that prioritise commercial gain over public interests and demonstrably disadvantage marginalised communities. Third, by diverting resources within industry and regulatory agencies away from potentially more effective interventions, and by making platforms’ existing content moderation systems appear more legitimate, the DSA’s fetishisation of procedure over substance could actively exacerbate or reinforce unaccountable and unfair moderation practices. I conclude by identifying some elements of the DSA framework with the potential to enable more systemic reform of social media content moderation, and thereby more effectively address arbitrary and unjust censorship.

    14:00 – 14:30 | Trisha Prabhu, Jonathan Zittrain, Edmond Awad, Will Marks | Information Fiduciaries: An Exploration of Online Users’ Expectations and Interests

    Certain businesses and professionals who hold specialized knowledge or power, such as doctors and lawyers, are recognized as fiduciaries and as such have a duty of loyalty to their patients and clients. Recently, it has been argued that digital businesses, especially those which deal "not in money but in information," should hold this same duty of loyalty to their customers as "information fiduciaries". Should these duties be enacted in law, what should they look like? While it is mostly clear what is expected of doctors, the picture remains hazier for online information platforms. In this project, we make progress on these and other questions by exploring would-be beneficiaries' expectations of how online platforms should conduct themselves - either to then correct their misapprehensions of loyalty, or to help establish a floor of loyalty that a fiduciary must render to its users. For this purpose, we have performed a series of exploratory pilot studies on Amazon's Mechanical Turk platform (total N=435 participants). We presented participants with several hypothetical scenarios involving companies that are candidates for taking on fiduciary duties, asking each participant whether they would consider a company's data practice in a particular scenario to be "fair". In our first experiment, we observed that participants found practices more fair if they believed those practices were already being implemented by other companies. We are now developing a website designed as a serious game to collate valuable insights about which factors along different stages of the data lifecycle affect respondents' judgments of fairness.

    14:30 – 15:00 | Coffee Break

    15:00 – 16:45 | Workshops

    Connor Hogan | Assessing the Public Value of Data Use

    As data becomes an ever more present feature of our daily lives, it's increasingly important to prioritize the value that a given data use creates for society. At present, data users are able to make profits at the cost of people and communities. Individuals harmed by data use often lack legal recourse, either because they cannot prove who and what caused the harm, or because no law was broken. By necessitating the prevention of harm and centering public value, the data solidarity framework helps to ensure that the benefits and costs of data use are borne collectively and fairly. But how can we ensure that these principles are reflected in digital law? Data solidarity requires that data use that creates considerable public value receive more public support, such as by streamlining legal processes. Additionally, it requires that the full force of the law be used to prohibit data use that poses risks to individuals or communities. Finally, individuals harmed by data use must have easy and effective access to legal remedies. This workshop will introduce attendees to the concept of public value in the social sciences through data solidarity, and the public value assessment tool which has been developed by the Digitize! Project. Attendees will have an opportunity to assess the public value of given instances of data use themselves, and learn how to incorporate the data solidarity framework into their own research and practice, to ensure that public value is enshrined in the next phase of digital law.

     

     

    Paola Lopez | ChatGPT: The Good, the Bad and the Ugly (in German)

    Since its release, ChatGPT has received a great deal of media attention. Some argue that the production of text is being revolutionised for the better; others fear an erosion of a wide range of text-based institutions such as newspapers or the assessment practices of educational institutions. What ChatGPT can do is essentially delimited by its mathematical properties. This interactive workshop addresses the mathematical characteristics of ChatGPT in three parts: In the first part, "The Good", we look at the basic mathematical workings of ChatGPT and examine why and in what respects it works much better than its predecessor models, such as GPT-3. In the second part, "The Bad", we examine three mathematical characteristics that substantially limit ChatGPT's potential applications. In the third part, "The Ugly", we engage with the invisible human and planetary costs of such language models. Finally, we look at the media discourse around ChatGPT. Participants will need access to a browser, for example via smartphone or laptop.
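    For a concrete picture of the "mathematical workings" the workshop refers to: at its core, a language model of this kind repeatedly converts a score (logit) per vocabulary token into a probability distribution via softmax and samples the next token, with a temperature parameter controlling randomness. A toy Python sketch; the vocabulary and all numbers are invented for illustration:

```python
import math
import random

# Toy vocabulary and model scores (logits) for the next token after
# "The court ruled" -- all numbers are invented for illustration.
vocab = ["that", "against", "banana", "."]
logits = [2.1, 1.3, -3.0, 0.2]

def softmax(scores: list[float], temperature: float = 1.0) -> list[float]:
    """Turn logits into a probability distribution over the vocabulary.

    Lower temperature sharpens the distribution (more predictable text),
    higher temperature flattens it (more surprising text).
    """
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits, temperature=0.8)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print([f"{t}: {p:.3f}" for t, p in zip(vocab, probs)], "->", next_token)
```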

     

     

    Alexander Nussbaumer, Kai Erenli, Christian Gütl | Co-Creation Workshop: Legal Aspects of Open Web Search Engines

    The workshop intends to explain the technology of web search engines and to investigate relevant legal aspects based on an analysis of the technical components, processes, and data flows. Web search engines have become extremely important in modern society, as they are enablers of, and to some extent gatekeepers to, finding information on the Web. In the Western world, only two large independent web search engines are available (Google and Bing), both owned by private companies. Their inner workings are opaque and restrictive for users. In contrast, the Horizon Europe research project OpenWebSearch.eu aims to develop an open web search solution that is open for developers and transparent to end-users. Making the search process understandable and building trust in the technology is a key goal of this project. These requirements enable concepts for integrating ethical and legal aspects into the technology. Our interactive workshop will take up this opportunity by discussing legal aspects and potential law-by-design ideas. First, the concept and technology of open web search engines will be explained with illustrative figures and accessible descriptions that do not require detailed technical knowledge. Second, relevant legal aspects will be identified in small working groups and potential solutions elaborated. Finally, the overall method of decomposing a technology and analysing each component will be discussed with regard to its suitability for identifying potential legal problems in a new digital technology and for avoiding biases in law-making. In this way, workshop participants will learn about the technical background of search engines and experience a method of legally analysing a new and likely unfamiliar digital technology.
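    To give a concrete sense of the "technical components" the workshop will decompose: the core data structure of any web search engine is an inverted index, which maps each term to the documents containing it; ranking then orders the matches. A toy Python sketch, with the documents and the naive term-frequency ranking invented for illustration:

```python
from collections import defaultdict

# Toy document collection standing in for crawled web pages.
docs = {
    "d1": "open web search engines for everyone",
    "d2": "legal aspects of search engines",
    "d3": "open data and the open web",
}

# Inverted index: term -> set of ids of documents containing the term.
index: dict[str, set[str]] = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query: str) -> list[str]:
    """Boolean-AND retrieval with a naive term-frequency ranking."""
    terms = query.split()
    if not all(t in index for t in terms):
        return []  # some query term occurs in no document
    hits = set.intersection(*(index[t] for t in terms))
    return sorted(hits, key=lambda d: -sum(docs[d].split().count(t) for t in terms))

print(search("open web"))  # ['d3', 'd1']
```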

     

     

    Sebastian Schrittwieser, Edgar Weippl | Introduction to Privacy-Enhancing Technologies

    With the increasing use of digital technologies, the issue of privacy has become more critical than ever. To ensure the protection of individual privacy, it is essential for lawmakers to have a fundamental understanding of privacy-enhancing technologies (PETs), how they are used in today's digital environments such as smartphone messaging, and how legislative proposals such as those for lawful interception (backdoors, data retention, etc.) and interoperability can affect them negatively. The workshop will be conducted in an interactive format, using presentations and hands-on activities. Participants will engage in practical tasks, such as identifying PETs in widely used web services, and discuss the privacy implications of current legislative proposals for these services and their implemented PETs.
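    As a taste of the subject matter: the PET at the heart of modern smartphone messaging is end-to-end encryption, where only the communicating devices hold the private keys, so the provider relaying a message cannot read it. A minimal Python sketch using the PyNaCl library's public-key `Box`; real messengers layer far more on top (e.g. the Signal protocol's key ratcheting), and the message text is invented:

```python
from nacl.public import PrivateKey, Box

# Each device generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(
    b"See you at the hearing at 9:00")

# The provider only ever relays `ciphertext`; without one of the private
# keys (or a mandated backdoor) the plaintext cannot be recovered.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
print(plaintext)  # b'See you at the hearing at 9:00'
```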

     

     

     

    18:00 – 20:00 | Public Roundtable, in cooperation with the Austrian Federal Ministry of Education, Science and Research

     

     

    Open Science: Legal Framework and Practical Challenges in the Digital Age

     

     

    Panelists:
    Barbara Sanchez Solis, Head of Center for Research Data Management, Technical University of Vienna
    Michael Strassnig, Deputy Managing Director of Vienna Science and Technology Fund GmbH (WWTF) & Programme Manager, research platform “Registerforschung”
    Petra Schaper Rinkel, Professor of Science and Technology Studies of Digital Transformation, Director of Idea Lab - The Interdisciplinary Digital Lab of the University of Graz
    Ronald Maier, Vice-Rector for Digitalisation and Knowledge Transfer, University of Vienna

    Hosted by:
    Katja Mayer, Research Platform Governance of Digital Practices, University of Vienna
    Žiga Škorjanc, Department of Innovation and Digitalisation in Law, University of Vienna

  • Friday, July 7
    Time | Speaker | Title

    09:00 – 09:45 | Barbara Prainsack | Keynote: The Bias of Bias: The Political Economy of Digital Practices

    09:45 – 10:00 | YDL 2023 Team | Announcement of YDL 2024

    10:00 – 10:30 | Coffee Break

    Block IV: Anti-Bias and Discrimination

    10:30 – 11:00 | Fatma Sümeyra Doğan | Digital Discrimination in Healthcare and the European Health Data Space Proposal

    Digital transformation in healthcare systems has gained immense pace as a result of the global pandemic. AI-based systems play one of the biggest roles in this process, and their possible uses seem endless: diagnostic systems, patient care, and patient management are just some of the applications of AI-based systems in healthcare. However, rapid developments bring forth numerous serious concerns, and digital (or algorithmic) discrimination is among the leading ones. Discrimination in this sense means that AI-based systems produce biased decisions against groups that are underrepresented in the datasets. To ensure a more secure environment for health data processing and to promote innovation, the European Health Data Space proposal (the Proposal) was introduced in May 2022. The Proposal contains numerous provisions related to AI systems, yet whether they would solve discrimination issues in healthcare is open to discussion. Notably, the word 'bias' appears in only one article of the Proposal: it is mentioned as an evaluation criterion in the framework of qualifying and labelling datasets in Article 56. The importance of inspecting datasets for potential biases cannot be denied; however, a similar evaluation should also have been required for the other stages of an AI-based system's application. This study aims to discuss whether the Proposal suffices to overcome digital discrimination in healthcare and to offer suggestions for improving it.
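    The kind of dataset evaluation gestured at in Article 56 can be made concrete with a toy audit that compares each group's share of a dataset with its share of the target population and flags shortfalls. A Python sketch; the groups, shares, and tolerance are all invented for illustration:

```python
# Toy audit: flag groups underrepresented in a health dataset relative to
# the population it should represent. All figures are invented.
dataset_share = {"women": 0.38, "men": 0.60, "other": 0.02}
population_share = {"women": 0.51, "men": 0.48, "other": 0.01}
TOLERANCE = 0.05  # maximum acceptable shortfall in representation

for group, expected in population_share.items():
    observed = dataset_share.get(group, 0.0)
    if expected - observed > TOLERANCE:
        print(f"Underrepresented: {group} "
              f"(dataset {observed:.0%} vs population {expected:.0%})")
# -> Underrepresented: women (dataset 38% vs population 51%)
```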

    11:00 – 11:30 | Felicitas Rachinger | Platform Practice and Non-Discrimination: Conceptions of Equality, Systemic Risks, and Data Bias in the DSA

    It is now widely recognised and empirically confirmed that the use of automated decision-making systems often does not lead to neutral decisions, but can transfer mechanisms of exclusion into the digital space and in some cases even reinforce them. Regarding online platforms, the EU legislator seems to recognise the issue and repeatedly emphasises the relevance of "non-discrimination" in the Digital Services Act (DSA). The DSA is cautious with specific statements on the subject, which is why the presentation will first be devoted to its underlying conceptions of equality. This concerns in particular the question of whether the DSA's understanding of equality goes beyond a purely formal one and also takes structural and intersectional dimensions of discrimination into account. A first indication is provided by Art 34 DSA, in which the EU legislator recognises discrimination as a systemic risk. In the context of the risk assessment of very large online platforms, the EU legislator also recognises the discriminatory potential of digital technologies and explicitly refers to "the design of […] recommender systems and any other relevant algorithmic system", "content moderation systems" and "systems for selecting and presenting advertisements". Discrimination does not always arise from technical design: it is often due to the data with which the systems work, for example when the data do not sufficiently reflect diversity or when they encode social bias. Against this backdrop, and closely related to the account of the understanding of equality, the presentation finally approaches the question of which phenomena of digital discrimination and bias the EU legislator has taken into account and which are covered by provisions on "non-discrimination".
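    One standard way to make such data-driven discrimination measurable in an automated system is to compare outcome rates across user groups (demographic parity, often summarised as a disparate-impact ratio). A toy Python sketch on an invented moderation log; real audits use real data and richer metrics:

```python
from collections import defaultdict

# Toy moderation log: (user group, content was removed?). Data is invented.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True), ("B", False)]

removed: dict[str, int] = defaultdict(int)
total: dict[str, int] = defaultdict(int)
for group, was_removed in decisions:
    total[group] += 1
    removed[group] += was_removed

rates = {g: removed[g] / total[g] for g in total}
print(rates)  # {'A': 0.667, 'B': 0.25} (rounded)

# Disparate-impact ratio; the "four-fifths rule" from US employment law
# is often borrowed as a rough review threshold.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate-impact ratio: {ratio:.2f}"
      + (" <- below 0.8, warrants review" if ratio < 0.8 else ""))
```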

    11:30 – 12:00 | Simona Stockreiter | The ‘Governance Turn’ in EU Digital Policy

    In my PhD project I study the shift of "regulatory orientations" and "institutional design choices" in EU regulatory governance in the field of digital policy. The project is divided into three parts. In the first part, I argue that a general proliferation of "new governance instruments" has taken place in the EU's regulatory policy due to the increasing complexity, interrelatedness, public salience, and uncertainty of regulatory issues. Such instruments are characterized by the rise of innovative, flexible governance architectures (co-regulation, multi-stakeholder approaches, etc.) and a simultaneous strengthening of independent institutions in both the policymaking and the implementation phases (a strong role of the European Commission, oversight boards, agencies, standardization bodies). I argue that regulatory design choices can generally be linked to three different regulatory regimes: a "deregulatory regime", an "evidence-based technocratic regime" and a "civic-republican regime". In the second part, I apply this toolbox to EU digital regulatory governance, drawing mainly on a longitudinal analysis of the Commission's strategies and main legislation in the area of content and data, as well as on expert interviews with high-level officials. I suggest that the regulatory regimes shifted from (1) a deregulatory approach (starting with the Digital Agenda 2000); to (2) increasing efforts to balance market liberalisation and social welfare goals (marked by the Digital Single Market Strategy in 2015); to (3) an increased focus on ethics, interrelatedness with sustainability goals, digital sovereignty, EU fundamental values and common goods (beginning with the New Digital Strategy 2020). It will be of great interest to understand to which regulatory regime the second and especially the third phase of digital regulatory governance can be linked. I argue that in certain cases, and specifically in the case of the AI Act, an "evidence-based technocratic regime" with deregulatory tendencies can be detected, owing to the fact that AI is regulated in the context of the "product safety regime". Against this background, it can be asked whether the "governance turn" points to a general undermining of a "politicized public sphere" in (EU digital) policymaking and to the general limitations of EU regulatory policy.

    12:00 – 12:30 | Michal Vosinek, Ondřej Woznica | Legislative Bias and RIA: A Case Study of Article 17 CDSM in the Czech Republic, Public Choice Theory and Cognitive Biases

    Regulatory impact assessment (RIA) is a regular part of the modern lawmaker's toolkit, aiming to improve legislative coherence and promote efficient decision-making by supporting the creation of evidence-based policies. RIA is a systematic, evidence-based approach that employs economic methodology to assess proposed legislation and its alternatives. In its very essence, it requires legislators to perform and document a cost-benefit analysis. Our research explores RIA processes and the biases characteristic of RIA in a case study of the transposition of Article 17 of the CDSM Directive in Czechia. Article 17 CDSM is a substantive piece of legislation that shapes online copyright and the use of user-generated content. The presentation will provide a brief background on RIA in Czechia and insight into how the RIA of Article 17 CDSM was performed there. We identify the biases that negatively affect RIA using the tools of economic theory. First, the presentation focuses on the Law and Economics framework and positive analysis. Our research employs public choice theory to explain the observed inadequacies and reveal a challenged legal landscape, pointing out that the RIA process is mostly formal, not material. Viable explanations range from regulatory capture to the improper chronological order of legal drafting and RIA. RIA is also vulnerable to treating technology as static, which is highly problematic amid rapid technological advancement, as in the online copyright arena. Second, the presentation offers insights from behavioral economics, exploring in depth the possible influence of cognitive biases on the RIA process. These findings should inform and enrich practical legislative processes and promote good law-making.
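    Since RIA in its very essence asks the legislator to document a cost-benefit analysis, the underlying arithmetic is worth seeing once: each option's yearly net benefits are discounted to a net present value and compared. A toy Python sketch; the options, figures, and discount rate are invented for illustration and bear no relation to the actual Czech RIA:

```python
# Toy RIA-style comparison: net present value of each regulatory option
# over a five-year horizon. All figures and the rate are invented.
DISCOUNT_RATE = 0.03

def npv(yearly_net_benefits: list[float], rate: float = DISCOUNT_RATE) -> float:
    """Discount year-by-year net benefits (benefits minus costs) to today."""
    return sum(b / (1 + rate) ** year
               for year, b in enumerate(yearly_net_benefits, start=1))

options = {
    "status quo": [0, 0, 0, 0, 0],
    "upload filters": [-40, 10, 15, 15, 15],      # heavy upfront compliance cost
    "licensing mechanism": [-10, 5, 8, 8, 8],
}

for name, flows in options.items():
    print(f"{name:>20}: NPV = {npv(flows):7.1f}")
```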

    12:30 – 13:30 | Lunch Break

    13:30 – 14:00 | Move to the Austrian Supreme Court

    14:00 – 15:10 | Panel Discussion at the Austrian Supreme Court | Exploring the Human-Technology Interface of Platform Liability: the DSA and DMA, and Bias in the Digital Age

     

     

    Panelists:
    Ranjana Andrea Achleitner, Institute for European Law, Johannes Kepler University Linz
    Alexandra Ciarnau, DORDA Rechtsanwälte GmbH
    Harald Leitenmüller, CTO, Microsoft Austria
    Maria Lohmann, epicenter.works - Plattform Grundrechtspolitik
    Eugenia Stamboliev, Philosophy of Media and Technology, University of Vienna

    Hosted by:
    Boris Kandov, Syed Zulkifil Haider Shah, Department of Innovation and Digitalisation in Law, University of Vienna

     

     

    As the digital landscape in the EU's services and eCommerce sectors continues to evolve rapidly, so too does the EU's ability to respond through legal interventions aimed at governing the use of certain technologies. The DSA and DMA are two prime examples of recent legislation aimed at regulating digital platforms and online markets. However, the arguably wide-spectrum and novel regulatory emphasis of these acts raises important questions, most notably issues of bias in law-making. These issues bear not only on the credibility and potential efficiency of the legislation but, more broadly, also shed light on the ever-pertinent questions concerning the legitimate role of law in regulating economies, societies, and human-technology interactions and relations in contemporary digital spaces. This interdisciplinary panel discussion will bring together experts from a variety of fields to explore these complex, multi-faceted issues and to consider the potential impacts of the legislation on individuals, businesses, and society as a whole. It will provide a timely and nuanced examination of the intersection of platform liability, the DSA and DMA, and bias in the digital age. Thanks to the diverse group of experts, attendees will have a unique opportunity to gain a holistic understanding of issues regarding bias on online platforms and to put questions directly to the experts.

    15:10 – 15:30 | Elisabeth Lovrek, President of the Austrian Supreme Court | Closing Words