Spotlight on IP: Do You Own It? AI and Copyrights


As AI becomes increasingly ingrained in students’ lives at Vanderbilt, it is easy to use it without considering the effect this practice has on a student’s ownership of their work. On the surface, it may seem that if a student put thought into a submitted assignment, they must own it. However, any use of AI in the process of creating an assignment adds nuance to whether a student can truly own their work.

What determines who owns a work?

Traditionally, copyright determines who owns “original works of authorship” such as photographs, computer programs, and books. According to the US Supreme Court, a copyrighted work must include a “spark” and “modicum” of originality. Once someone has created an original work, they can be a copyright owner, even without formal documentation, for the duration of their life plus 70 years.

However, owning copyrights is not as simple as it may seem now that AI is in the picture. Specifically, the case Doe v. GitHub sheds light on the dent that AI puts in traditional copyright policies. In this case, GitHub, a website that allows developers to share their code with one another, was sued by a group of developers led by Matthew Butterick. They sued GitHub because, in 2021, GitHub and OpenAI created Copilot, an AI coding program. The purpose of Copilot was to streamline GitHub users’ coding process by predicting the next lines of code a user would write based on the user’s initial input. However, the developers who sued GitHub claimed that Copilot was suggesting copyrighted code to GitHub users without providing correct attribution.

The case was closed in California in July 2024, when the court dismissed most claims. Regardless of the outcome, however, the case still poses many questions about the legal issues of AI having access to copyrighted content and whether content created with the help of AI remains eligible for copyright.

How are laws about copyright agreed upon?

AI already makes it difficult to standardize the approach to determining what can be copyrighted. The fact that copyright is governed by policies on an international scale makes this attempt at standardization even more difficult. Agreements such as TRIPS (the Agreement on Trade-Related Aspects of Intellectual Property Rights) and the Berne Convention were created to help guide intellectual property on a global scale.

An interview with Dr. Daniel J. Gervais, a professor at Vanderbilt’s Law School and the author of The TRIPS Agreement: Drafting History and Analysis, which now has five editions, revealed important insights about the future of these agreements in light of AI. Specifically, he stated that “[TRIPS and the Berne Convention] are more flexible than they might initially appear. Together, Berne and TRIPS apply to over 180 countries. On a number of issues, they were deliberately drafted at a relatively high level of abstraction, which allows domestic legal systems to adapt core concepts like ‘authorship’ and ‘originality’ over time. In that sense, TRIPS does not require immediate amendment to accommodate AI-assisted creation.”

While these agreements may not have to be adjusted, Dr. Gervais did point out that the very idea serving as the foundation for these agreements may be breaking down in the presence of AI. In Dr. Gervais’ words, “as AI systems begin to generate outputs that are indistinguishable from human works, that underlying assumption is strained.” Since these agreements were created to protect exclusively human work, it is difficult to know how they will continue to operate as humans receive help from AI. To address this discrepancy, Dr. Gervais stated that “over time, that divergence [from exclusively human work] itself could generate pressure for clarification at the international level.” However, even as AI complicates the task of identifying true human expression, Dr. Gervais has argued in his papers that human thought “is actually required by Berne and TRIPS” when determining what can be copyrighted.

How are Vanderbilt students affected by this?

Often, when a student writes code, writes a paper, or makes any other creative work, they are the owner of that work. However, whether and to what extent they used AI in the process determines whether their work is true “human expression” and, thus, can be copyrighted. Below, four scenarios a student may encounter are described, and the degree to which the resulting work could be copyrighted is analyzed based on my interview with Dr. Gervais:

  • A student asks DALL·E (ChatGPT’s image generator) to make a photo of a “pink, flowery heart” and makes no further edits to the output.

The student here put minimal thought into the production of this photo, even though they created the prompt. According to Dr. Gervais, “if a student simply inputs a prompt into an AI system and uses the output as-is, their claim to copyright is likely to be weak or nonexistent in many jurisdictions.” Here, the student would not own their work.

  • A student participating in a Hack-A-Thon comes up with a creative idea for code to address the task and asks ChatGPT how to refine their idea, but ultimately writes the code themselves.

While AI was used during the brainstorming process, the student ultimately crafted the code themselves. Dr. Gervais states that “if [the student] meaningfully shape[s] the output, they are more likely to have a valid claim, at least over those human contributions if those contributions are separable from the machine’s.” Here, the student should own their work.

  • For a student’s data science class, they must find data about the number of people in a region getting Starbucks every day. The student consults AI to acquire this data, but then independently cleans the data, generates graphs, and creates a written analysis based on the data.

This scenario involves more nuance than the previous examples, given that the data the student got from AI could have come from a copyrighted source, as in Doe v. GitHub. According to Dr. Gervais, “while students are unlikely to bear direct liability for training practices” or using this data, they still must be aware that it could affect their ownership of the work, even if they analyzed the data without AI. Here, it is unclear whether the student owns their work.

  • A student is coding a website for their start-up and, unsure what to do, asks AI to write the code for them. The student then edits roughly 50% of the lines of code.

This scenario presents a nuance similar to the previous one. Even though the student prompted the AI to generate the code and then edited its output, the code was still ultimately made by AI. In this situation, the student would likely have ownership only over the lines of code they wrote themselves.

At the end of the day, even though AI can output results, Dr. Gervais notes that it is important to consider that “humans have depended on humans to make progress in art, literature, journalism, and generally the production of new ideas via essays and scientific publications. Delegating this task to machines changes the arc of history and progress. Moreover, as neuroscientists have shown, over time humans will lose the ability to perform cognitive tasks delegated to the machine.” Therefore, even if a student can retain ownership of their work despite using AI, it is important for students to think independently before turning straight to this resource.

__________________________________________

Interested in learning more about the business of IP and its intersection with AI? Check out these other articles in Sasha Eckler’s 3-part Spotlight Series:

Law, Science, and Business: The Ingredients for Patent Law

If AI Creates, Who Gets Credit?

By Sasha Eckler
