AI is increasingly becoming part of association workflows. And while that brings amazing capabilities and efficiencies, it also introduces complex questions around copyright ownership, fair use, and the legal implications of using AI to generate content.
If your association is starting to grapple with these issues, you should listen to this episode of Higher Logic’s The Member Engagement Show in which attorney Dorothy Deng, a partner at Whiteford Law, walks through the evolving relationship between generative AI and copyright law.
Dorothy explains the fundamentals of copyright, including the human authorship requirement and the fair use doctrine, and how they apply in the age of AI-generated content. She also discusses the practical implications for associations and nonprofits—from protecting proprietary work to evaluating licensing opportunities for training AI systems.
Dorothy is a partner at Whiteford Law, specializing in legal issues for associations, nonprofits, and political organizations. She advises on digital content, corporate governance, copyright, social media, privacy law, and now generative AI.
Two main copyright principles shape the conversation around AI-generated content:
Under US law, no: copyright protection requires a human creative contribution, so purely AI-generated content can't be copyrighted. In March 2025, a programmer tried to register copyright for an image generated by his "Creativity Machine." The court affirmed that content created solely by AI isn't subject to copyright protection and automatically enters the public domain.
However, if a human significantly edits, arranges, or builds upon AI outputs, that human contribution may be copyrightable. For example, an artist who provided a hand-drawn sketch and used AI to enhance it was granted a copyright that excluded the AI-only elements.
The "monkey selfie" case involved wildlife photographer David Slater, who set up a camera in Indonesia; a monkey named Naruto used it to take a selfie. PETA sued on the monkey's behalf, claiming it owned the copyright. The court confirmed that you must be human to own a copyright, which means non-human entities, like AI, cannot.
If you’re looking for a percentage, there isn’t one. But just writing detailed prompts or revising prompts multiple times doesn’t amount to enough human control to give you ownership of the AI output. If you substantially edit, rearrange, coordinate, or modify the AI output with enough human expression, you may establish copyright ownership.
In Dorothy’s words, “If your job is to create content for your association, you should not be lazy. You should make sure there is sufficient human contribution to the output, especially for highly proprietary content.”
Disclosing that content is “AI-generated” is advisable in some situations. But if you’re labeling any content as AI-generated where you’ve used AI at all (even a little bit), that could jeopardize your ownership claims.
“It makes me nervous when sometimes I see a label that says ‘AI-generated.’ You could be jeopardizing your ownership claim if it suggests it’s purely AI generated,” says Dorothy.
Suggesting your content is entirely AI-created implies it isn’t eligible for copyright protection. Approach labeling content with AI-generated elements with an eye toward transparency: clarify when content is purely AI-generated versus a hybrid of AI assistance and human authorship.
Fair use is not a blanket right, explained Dorothy: “The fair use doctrine is a legal defense. It’s available if you get sued in court. The burden of proof is on the defense side to show that the use qualifies as fair.”
Fair use has four factors: (1) purpose and character of use, (2) nature of the copyrighted work, (3) amount of copyrighted work used, and (4) effect on the potential market. Recent court cases involving Meta and Anthropic found using copyrighted content to train LLMs is “highly transformative” and favored fair use.
Both Meta and Anthropic had implemented “guardrails” or “adversarial prompting” to prevent direct copying of copyrighted works in outputs. Plaintiffs claiming their work was stolen couldn’t present evidence that AI outputs infringed their copyrighted materials. But it’s worth noting that many of these cases have dealt with written material.
Audio and visual artists claiming copyright infringement, however, may be better positioned to prove that using their copyrighted materials to train LLMs caused meaningful market harm.
During this podcast episode, Dorothy and I focused on US copyright law. Our Australian Marketing Manager, Ange Bruce, dug into how things have been shaking out in Australia when it comes to AI and copyright law.
| US | AU | Difference |
| --- | --- | --- |
| Has fair use, a broad and flexible doctrine. It allows unlicensed use of copyrighted works for purposes such as commentary, criticism, education, and, potentially, training AI systems. Courts assess on a case-by-case basis. | Has fair dealing, which is far narrower. It applies only to specific purposes (research, criticism, news reporting, parody, etc.). Using copyrighted material to train or prompt AI would not easily fall under these categories. | Stricter in Australia: Less legal leeway for using third-party works in AI training or outputs. |
| Explicitly rejects AI as an author, but has been testing the boundaries of how much human input is required (e.g., editing AI output can sometimes be enough to claim copyright). | The Federal Court has confirmed that AI cannot be an inventor or author under any circumstances, and copyright only arises from a human’s original expression. | Stricter in Australia: No grey area — AI cannot ever be an author. |
| The Copyright Office has ruled that simply entering a text prompt is not enough for protection, but if there’s significant human creative shaping of AI output, it may qualify. | Prompts are unlikely to ever be enough; only genuine human creative authorship counts. Courts have been less willing than U.S. bodies to entertain “borderline” authorship claims. | Stricter in Australia: Threshold for human involvement is higher. |
| Currently many lawsuits are challenging whether training AI on copyrighted material is infringement, with some arguments that it could be fair use. | No major test cases yet, but under the current law, because there’s no broad fair use defence, AI training on copyrighted data without a licence is more clearly risky. | Stricter in Australia: Until reforms occur, copyright holders are in a stronger position to object to unlicensed AI training. |
Be very careful inputting member data into AI tools. Recent lawsuits highlight risks where user data was allegedly used for AI training after users deleted their accounts.
AI training legal issues extend beyond copyright to include contract law, terms of service violations, and data privacy considerations. That’s why associations should be careful about what data is shared with AI systems.
Get clear on what content should be public versus what should stay behind paywalls or in member-only areas. Publicly available content is more likely to be used for AI training without permission.
AI systems need high-quality, expert-level data for training to get better. Since associations are experts in their fields, there’s an emerging licensing market for their content to train AI systems. It’s possibly a new revenue stream.
“AI is hungry to learn. To level up, it needs expert-level, high-quality data. Associations are experts in their fields, so there is a licensing opportunity for high-quality materials created for purposes of training AI systems.”
Be clear internally about which AI tools can be used, what types of data can be input, how to document human contribution, requirements for reviewing AI outputs, and provisions in vendor agreements addressing AI use and IP warranties.
Implement a decision-making framework for using AI for different purposes. Consider classifying your data (if you haven’t already). Ask yourself: What are the specific benefits? What are the potential risks? Is this an ethical use case? What level of human oversight is appropriate? Always maintain appropriate human oversight.
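One way to operationalize that framework is a simple data-classification gate that checks, before anything is pasted into an AI tool, whether that tool is approved for that class of data. This is a minimal, hypothetical sketch: the tool names, classification tiers, and rules are illustrative placeholders, and a real policy should come from your own governance process and legal review.

```python
# Hypothetical data-classification gate for AI tool inputs.
# Tool names and tiers below are placeholders, not real products or policy.

APPROVED_TOOLS = {"internal-llm", "vendor-llm-with-dpa"}

# Most restrictive first: member PII never goes in; public content is lowest risk.
ALLOWED_INPUT = {
    "member_pii": set(),                      # never input member data
    "member_only_content": {"internal-llm"},  # keep behind the paywall
    "public_content": APPROVED_TOOLS,         # already exposed to crawlers
}

def may_input(data_class: str, tool: str) -> bool:
    """Return True only if this tool is approved for this class of data."""
    return tool in APPROVED_TOOLS and tool in ALLOWED_INPUT.get(data_class, set())

print(may_input("member_pii", "internal-llm"))             # False
print(may_input("public_content", "vendor-llm-with-dpa"))  # True
```

Even a lightweight gate like this forces the questions above (benefit, risk, oversight) to be answered once per data class, rather than ad hoc by each staff member.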
Watch Higher Logic’s webinar with ASAE on ways to safely embrace AI.
AI is exciting, but associations should temper that excitement with an awareness of the laws surrounding AI use. The best approach is thoughtful integration of AI and substantial human creativity. That’s what leads to high-quality, protectable work.
As you look to the future, licensing your content as high-quality training data could bring in revenue, especially if your association has specialized expertise. But if you’re considering this, you also need to consider where your content and data currently live: if it’s public, it’s likely already being pulled into LLMs. And data protection and member privacy should always be the primary consideration.
Courts are continuing to shape the laws around AI and copyright. The best thing an association can do is have qualified legal counsel review its policies and practices. It’s an investment, but the disputes it avoids could save you a lot of headaches and maximize AI’s role in your organization. If you’re not at the point of engaging legal counsel, though, it’s still a good idea to have regular conversations internally: “Even if you don’t have an on-staff legal person, you can start by asking what your internal data privacy practices are,” says Dorothy, “discussing which AI tools you’re using and what types of data you’re putting into them.”