PI here from a humanities lab. I want to frame a normative study on authorship and accountability in AI-assisted creative works across jurisdictions (EU/US/Japan) and creative guilds. Goal: clarify where moral agency sits (tool, user, platform) and propose a workable authorship model. I prefer argumentative analysis plus 10–12 stakeholder interviews. What scope and questions keep it philosophically rigorous yet relevant to practice?

Executive Summary: Navigating Authorship and Accountability in AI-Assisted Creative Works

This normative study investigates authorship and accountability in AI-assisted creative works. Driven by the need to clarify moral agency and propose a workable authorship model, the research integrates philosophical inquiry, comparative legal analysis across the EU, US, and Japan, and qualitative insights from diverse creative guilds.

The study establishes that traditional, human-centric notions of authorship are increasingly strained by AI’s generative capabilities. Philosophically, AI systems lack true moral agency (consciousness, intentionality, free will), yet their “functional agency” (the capacity to produce outcomes autonomously) necessitates a re-evaluation of responsibility. The research concludes that moral agency remains firmly with human users, while AI tools carry “functional responsibility” and their developers and platforms carry “design responsibility” for inherent biases, design choices, and deployment mechanisms.

A comparative legal analysis reveals that all three jurisdictions (EU, US, Japan) largely maintain a human-authorship requirement for copyright. However, they diverge significantly in regulatory approach, particularly on the use of copyrighted material for AI training (e.g., Japan’s permissive Article 30-4 of its Copyright Act vs. the US’s litigated fair-use doctrine and the EU’s text-and-data-mining (TDM) exceptions) and on AI liability frameworks (e.g., the EU’s proactive AI Act and proposed liability directives vs. the US’s reliance on adapting existing tort law). These differences reflect varying philosophical priorities regarding innovation, human rights, and the balance between public and private interests.

Insights from creative guilds (visual arts, music, writing, design, film/animation) highlight pervasive economic anxieties, particularly over job displacement and the uncompensated use of members’ works for AI training. Universal demands include transparency about AI involvement, clear attribution practices, and robust protection of artists’ digital likenesses and voices. These practical concerns make clear that workable frameworks must be not only legally sound but also economically viable and ethically responsible.

To address these complexities, the study proposes the “Spectrum of Contribution” Authorship Model, which treats authorship as a distributed phenomenon: responsibility varies with the nature and extent of human creative input and control. The model delineates three categories (a schematic sketch follows the list):

  1. Human-Driven Creation (Primary Authorship): Where the human provides significant intellectual creation and transformative input, retaining full copyright. The AI acts as a sophisticated tool.
  2. AI-Assisted with Human Curation (Derivative or Hybrid Authorship): Where AI generates substantial content and the human’s role is primarily curation, selection, and modification that is significant but not transformative. The human is treated as the author of a derivative work. This category also raises the question of sui generis rights for valuable AI-generated content lacking sufficient human originality, though such rights would require significant legislative reform.
  3. Autonomous AI Generation (No Human Authorship): Where minimal to no human creative input exists, rendering such works generally uncopyrightable under current laws. This scenario prompts consideration of sui generis rights to incentivize the development of genuinely autonomous creative AI.
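
To make the three tiers concrete, the following is a minimal, hypothetical Python sketch of the model as a classification rule. It is an illustration only, not part of the proposed framework: the `WorkRecord` fields and the boolean tests are assumptions standing in for what would, in practice, be case-by-case legal judgments about sufficient human originality.

```python
from dataclasses import dataclass
from enum import Enum


class ContributionTier(Enum):
    """The three tiers of the 'Spectrum of Contribution' model."""
    HUMAN_DRIVEN = "primary authorship"    # human retains full copyright
    AI_ASSISTED = "derivative or hybrid"   # human authors a derivative work
    AUTONOMOUS_AI = "no human authorship"  # generally uncopyrightable today


@dataclass
class WorkRecord:
    """Illustrative (hypothetical) metadata on human involvement in a work."""
    transformative_human_input: bool   # e.g., original composition, substantial reworking
    human_curation_or_selection: bool  # e.g., choosing and arranging AI outputs


def classify(work: WorkRecord) -> ContributionTier:
    """Map a work's human-involvement profile onto the three tiers.

    The boolean tests are placeholders: each boundary actually turns on
    a qualitative judgment of creative input and control, not on flags.
    """
    if work.transformative_human_input:
        return ContributionTier.HUMAN_DRIVEN
    if work.human_curation_or_selection:
        return ContributionTier.AI_ASSISTED
    return ContributionTier.AUTONOMOUS_AI


# Example: a work where the human only curated among AI outputs.
print(classify(WorkRecord(False, True)))  # ContributionTier.AI_ASSISTED
```

A real implementation would attach provenance evidence rather than booleans, but the control flow mirrors the model’s decision order: transformative input first, curation second, autonomy last.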

The model clarifies accountability by reaffirming the human user as the primary moral agent. Developers and platforms are held responsible for the inherent safety, legality, and biases of the AI systems they create and deploy, as well as for the ethical sourcing of training data. This distribution operationalizes accountability through mandatory transparency (e.g., AI usage declarations), the establishment of fair remuneration mechanisms, and the clear assignment of product liability to developers and platforms for design flaws, training data biases, and potential infringements. The overarching goal is to ensure AI serves as an augmentative force for human creativity and flourishing.
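
As one concrete illustration of the “AI usage declarations” mentioned above, the following hypothetical Python sketch shows what a machine-readable declaration might contain. Every field name is an assumption made for illustration; no such standard currently exists in any of the three jurisdictions studied.

```python
import json
from dataclasses import asdict, dataclass


@dataclass
class AIUsageDeclaration:
    """Hypothetical transparency record attached to a published work."""
    work_title: str
    ai_system: str                # name/version of the generative tool used
    contribution_tier: str        # tier under the 'Spectrum of Contribution' model
    human_contribution: str       # summary of the human's creative role
    training_data_statement: str  # developer-supplied provenance/licensing note


declaration = AIUsageDeclaration(
    work_title="Example Illustration",
    ai_system="HypotheticalImageModel v2",  # placeholder, not a real product
    contribution_tier="derivative or hybrid",
    human_contribution="Wrote prompts, selected among 40 outputs, repainted the foreground.",
    training_data_statement="Licensed and public-domain images only (per vendor statement).",
)

print(json.dumps(asdict(declaration), indent=2))
```

A declaration of this kind would pair naturally with the model: the user attests to the contribution tier, while the training-data statement places the provenance burden on the developer or platform.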

The study acknowledges limitations, including the qualitative nature of the stakeholder interviews (which limits statistical generalizability) and the focus on three jurisdictions. The rapid pace of AI development also demands adaptive rather than static frameworks. These limitations open avenues for future research: empirical studies of AI’s economic impact, deeper exploration of specific ethical dilemmas (e.g., deepfakes, AI-generated propaganda), and the development of international standards or treaties for AI intellectual property. The study concludes by emphasizing the critical need for ongoing interdisciplinary dialogue and agile policy-making to harmonize technological progress with human values in the AI era.


Table of Contents

Introduction and Research Framing

Reshaping Creation: Interdisciplinary Conceptual Foundations for Authorship and Accountability in AI-Assisted Creative Works

Comparative Legal and Regulatory Landscape Analysis: Implications for Moral Agency and Authorship

Creative Guilds and Industry Practices: Stakeholder Perspectives

Crafting a New Paradigm for AI-Assisted Creation: The “Spectrum of Contribution” Authorship Model and Accountability Framework

Conclusion and Future Directions