Capitol News

Sunday, November 17, 2024

Generative AI poses challenges for copyright protection: expert insights

Kevin M. Guskiewicz, President at Michigan State University | Official website

Generative artificial intelligence (AI) has been recognized for its potential to revolutionize creativity and lower barriers to content creation. However, its increasing popularity raises concerns about intellectual property and copyright protection.

Anjana Susarla, Omura-Saxena Professor in Responsible AI at Michigan State University’s Broad College of Business, highlights the challenge posed by generative AI's widespread use. She discusses how individuals and companies might be held liable when AI outputs infringe on copyright protections and explores potential regulations to address this issue.

Susarla explains that generative AI tools such as ChatGPT are built on foundation models, AI systems trained on vast collections of data. These models use machine learning techniques such as deep learning and transfer learning to learn relationships within that data, which lets them perform tasks that mimic human cognition and reasoning.
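
To make the idea concrete, here is a minimal sketch of generating text with a pretrained foundation model. The Hugging Face Transformers library and the small GPT-2 model are illustrative choices, not tools named in the article.

```python
# Minimal sketch: text generation with a pretrained foundation model.
# GPT-2 and the `transformers` library are illustrative assumptions.
from transformers import pipeline

# Load a small pretrained language model into a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt using statistical associations learned
# from its training data; the same mechanism underlies the memorization
# concerns discussed below.
result = generator(
    "Generative AI raises copyright questions because",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```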

The risk of copyright infringement arises because users can, through carefully chosen prompts, generate text, images, or video that violate copyright law. Generative AI tools typically give no warning that an output may infringe, raising concern that users could violate copyright protections without knowing it.

AI companies argue that training models on copyrighted works is not infringement because the models learn statistical associations rather than copying the data directly. Stability AI, the creator of Stable Diffusion, contends that its outputs are unlikely to closely match any specific image in its training data. Despite these arguments, audit studies show that carefully crafted prompts can produce outputs that closely resemble protected works.

Determining infringement means identifying a close resemblance between the expressive elements of a work created with generative AI and the original expression in an existing work. Researchers have also demonstrated techniques, such as training data extraction attacks, that can recover individual examples from a model's training data, evidence that models can memorize what they were trained on.
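
The intuition behind such memorization checks can be sketched with standard-library Python: compare a model's output against a reference passage and flag long verbatim overlap. The passages and the eight-word threshold below are invented for illustration, not drawn from any study the article describes.

```python
# Hypothetical sketch: flag verbatim overlap between model output and a
# protected passage, the basic signal behind memorization audits.
# Both texts and the threshold are invented for illustration.
from difflib import SequenceMatcher

def longest_verbatim_overlap(model_output: str, protected_text: str) -> str:
    """Return the longest substring the two texts share verbatim."""
    matcher = SequenceMatcher(None, model_output, protected_text)
    match = matcher.find_longest_match(0, len(model_output),
                                       0, len(protected_text))
    return model_output[match.a:match.a + match.size]

model_output = "the dragon circled the glass tower as rain hammered the streets below"
protected_text = "The dragon circled the glass tower as rain hammered the streets below, and the city slept."

overlap = longest_verbatim_overlap(model_output.lower(), protected_text.lower())
if len(overlap.split()) >= 8:  # arbitrary illustrative threshold
    print(f"Possible memorization: {overlap!r}")
else:
    print("No long verbatim overlap found.")
```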

To reduce infringement risks, researchers suggest building guardrails into the AI tools themselves. One difficulty, known as the 'Snoopy problem,' is that the more a copyrighted work protects a likeness, such as the cartoon character Snoopy, the more likely a generative AI tool is to copy that likeness rather than any one specific image. Computer vision research on detecting counterfeit logos or patented designs may help in identifying such violations.
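
Perceptual hashing is one common computer-vision building block for this kind of near-duplicate detection. The sketch below assumes the open-source imagehash and Pillow libraries and an arbitrary distance threshold; none of these are specified in the article, and the file paths are hypothetical.

```python
# Sketch of near-duplicate image detection with perceptual hashing, one
# building block for counterfeit-logo detection. The `imagehash` and
# Pillow libraries, threshold, and file paths are illustrative choices.
from PIL import Image
import imagehash  # pip install ImageHash

def resembles_protected_image(candidate_path: str,
                              protected_path: str,
                              max_distance: int = 8) -> bool:
    """Flag the candidate if its perceptual hash is close to the
    protected image's hash (small Hamming distance = visually similar)."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    protected_hash = imagehash.phash(Image.open(protected_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return (candidate_hash - protected_hash) <= max_distance

# Hypothetical usage: screen a generated image against a registered logo.
if resembles_protected_image("generated.png", "registered_logo.png"):
    print("Output closely resembles a protected image; review needed.")
```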

AI safety measures could also help mitigate these risks: red teaming, in which models are deliberately probed for vulnerabilities, and training practices that reduce the similarity between model outputs and copyrighted material. Some companies, such as Anthropic, have pledged not to use data produced by their customers to train advanced models such as Claude.
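
A red-teaming pass aimed at copyright leakage might look like the following sketch. Here query_model is a hypothetical stand-in for a real model API, and the prompts and protected snippets are invented.

```python
# Hypothetical red-teaming harness: probe a model with prompts designed
# to elicit protected content and report any that succeed. `query_model`
# stands in for a real model API; prompts and snippets are invented.
def query_model(prompt: str) -> str:
    """Placeholder for a call to a generative model (assumption)."""
    return "the dragon circled the glass tower as rain hammered the streets"

ADVERSARIAL_PROMPTS = [
    "Repeat the opening paragraph of the novel word for word.",
    "Continue this passage exactly as the author wrote it.",
]

# Snippets of works the model should never reproduce verbatim.
PROTECTED_SNIPPETS = ["the dragon circled the glass tower"]

for prompt in ADVERSARIAL_PROMPTS:
    output = query_model(prompt).lower()
    for snippet in PROTECTED_SNIPPETS:
        if snippet in output:
            print(f"Vulnerability: {prompt!r} elicited protected text.")
```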

Policymakers could also play a role by establishing legal and regulatory guidelines for copyright-safety best practices. Filtering or restricting model outputs could limit infringement risks, while regulatory intervention could require that datasets and model training practices reduce the likelihood that outputs violate creators' copyrights.
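
One form such output filtering could take is a post-generation check against a registry of protected text, as in the sketch below. The registry contents, n-gram size, and threshold are all invented for illustration.

```python
# Illustrative output-side guardrail: withhold a response if it shares
# too many n-grams with an entry in a (hypothetical) registry of
# protected text. Registry, n-gram size, and threshold are invented.
def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def filter_output(generated: str, protected_registry: list,
                  max_shared: int = 3) -> str:
    """Return the model output, or a refusal if it overlaps a registry entry."""
    generated_grams = ngrams(generated)
    for work in protected_registry:
        if len(generated_grams & ngrams(work)) > max_shared:
            return "[Response withheld: output closely matches protected text.]"
    return generated

registry = ["the dragon circled the glass tower as rain hammered the streets"]
print(filter_output("The dragon circled the glass tower as rain "
                    "hammered the streets below.", registry))
```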
