
AI Regulation in 2026: What Creative Businesses Need to Know

Oakgen Team · 9 min read

The era of unregulated AI is over. In 2023, generative AI tools operated in a legal gray zone -- powerful, widely adopted, and almost entirely unregulated. By 2026, the regulatory landscape has transformed. The EU AI Act is in active enforcement. Fifteen US states have passed AI-related legislation. China's Generative AI regulations are mature and expanding. Brazil, Canada, Japan, the UK, and India are all advancing their own frameworks.

For creative businesses -- agencies, studios, freelancers, and brands that use AI to generate images, video, audio, and text -- these regulations are not abstract policy. They create concrete obligations around disclosure, copyright, data handling, and liability that affect daily operations.

This article provides a practical overview of the regulatory landscape as it stands in late 2025 and early 2026, focused specifically on what creative professionals and businesses need to understand and act on. This is not legal advice -- consult qualified counsel for your specific situation -- but it is an informed guide to the terrain.

The EU AI Act: The Most Comprehensive Framework

Structure and Timeline

The EU AI Act, which entered into force in August 2024, is the world's most comprehensive AI regulation. It uses a risk-based classification system:

  • Unacceptable risk: Banned (social scoring, real-time biometric surveillance in public spaces)
  • High risk: Strict requirements (hiring systems, credit scoring, law enforcement)
  • Limited risk: Transparency obligations (chatbots, emotion recognition, deepfakes)
  • Minimal risk: No specific requirements (spam filters, AI-enhanced games)

Most creative AI applications fall into the limited risk category, which triggers transparency obligations rather than prohibitions or extensive compliance requirements. The timeline for enforcement is staggered: unacceptable risk provisions took effect in February 2025, transparency and governance obligations in August 2025, and high-risk system requirements in August 2026.

The Transparency Obligation Is Live

Since August 2025, the EU AI Act requires that AI-generated content be labeled as such when it is published. This applies to images, video, audio, and text that could be mistaken for human-created content. The requirement is on the deployer (the business publishing the content), not the AI tool provider. Creative businesses operating in or targeting EU markets must have labeling practices in place now. Fines for non-compliance can reach up to 35 million EUR or 7% of global turnover.

What Creative Businesses Must Do

For most creative agencies and studios using AI tools, the EU AI Act creates four primary obligations:

  1. Label AI-generated content: Any content published that could be mistaken for human-created must be clearly marked. This includes AI-generated images, videos with AI-generated elements, synthetic voices, and AI-written text used in public-facing materials.

  2. Maintain records: Document which content was AI-generated, which tools were used, and the prompts or inputs that produced the output. This creates an audit trail for regulatory inquiries.

  3. Ensure human oversight: AI-generated content used in commercial contexts should be reviewed by a human before publication. Fully automated content pipelines without human review create regulatory risk.

  4. Inform employees and contractors: Staff who interact with AI systems must be informed that they are doing so and given sufficient training to understand the limitations and biases of the tools.

General-Purpose AI Model Obligations

The EU AI Act also regulates the AI model providers themselves (OpenAI, Google, Stability AI, etc.) under the "General-Purpose AI" (GPAI) provisions. Providers of GPAI models must publish training data summaries, comply with EU copyright law, and provide technical documentation. Models classified as posing "systemic risk" (generally those trained with more than 10^25 FLOPs) face additional obligations including adversarial testing and incident reporting.

This matters for creative businesses because it means the tools you use are themselves subject to compliance requirements. Model providers are responsible for training data legality and technical safety, which reduces (but does not eliminate) downstream liability for tool users.

The US Landscape: A Patchwork of State Laws

No Federal AI Law (Yet)

As of late 2025, the United States has no comprehensive federal AI regulation. The Biden administration's Executive Order 14110 on AI Safety (October 2023) established guidelines and reporting requirements for frontier AI models but did not create binding obligations for businesses using AI tools. Congressional efforts to pass comprehensive AI legislation have stalled, with competing proposals from multiple committees.

The result is a patchwork of state-level legislation that creates an increasingly complex compliance landscape for businesses operating nationally.

Key State Regulations

| State | Law/Bill | Key Provisions for Creative AI | Status (Late 2025) |
|---|---|---|---|
| California | SB 942 (AI Transparency Act) | Watermarking requirement for AI content, disclosure in political ads | Enacted, enforcement 2025 |
| Colorado | SB 24-205 (AI Consumer Protections) | Disclosure for consequential AI decisions, bias audits | Enacted, effective Feb 2026 |
| Illinois | AI Video Interview Act + amendments | Consent for AI in hiring, deepfake restrictions | Enacted |
| New York | Various bills (city + state) | AI hiring audits, synthetic media disclosure | Partial enactment |
| Texas | HB 1709 | Deepfake restrictions, political ad disclosure | Enacted 2025 |
| Tennessee | ELVIS Act | Voice and likeness protection against AI replication | Enacted 2024 |
| Utah | AI Policy Act | AI disclosure in regulated interactions | Enacted 2024 |

The Disclosure Trend

Across states, the clearest trend is mandatory disclosure. Nearly every state that has passed or is considering AI legislation includes requirements that AI-generated content be labeled when presented to consumers. For creative businesses, this means:

  • AI-generated images used in advertising must be disclosed
  • Synthetic voices must be identified as AI-generated
  • AI-generated video content, particularly that depicting real people, requires labeling
  • AI-written marketing copy used in regulated industries (finance, health, real estate) may require disclosure

The specific requirements vary by state, creating compliance complexity for nationally operating businesses. The practical approach most firms adopt is defaulting to the strictest applicable standard -- California's -- and applying it uniformly.

Copyright: The Foundational Question

Copyright is the issue with the most direct impact on creative businesses using AI. The foundational question is twofold: (1) are AI model providers liable for using copyrighted works to train their models, and (2) who owns the copyright in AI-generated output?

Neither question has been definitively answered, though the landscape is becoming clearer.

Training Data Litigation

Multiple major lawsuits are working through US courts:

  • Getty Images v. Stability AI (Delaware, filed 2023): Alleges Stable Diffusion's training on Getty's copyrighted images constitutes infringement. Discovery ongoing as of late 2025.
  • Andersen v. Stability AI, Midjourney, DeviantArt (N.D. Cal., filed 2023): Class action by visual artists. Partially survived a motion to dismiss; proceeding to discovery.
  • NYT v. OpenAI (S.D.N.Y., filed 2023): The New York Times alleges ChatGPT's training on its articles constitutes infringement. The highest-profile case, with potential to set broad precedent.
  • Universal Music Group v. AI music services (multiple filings): Alleges AI music models trained on copyrighted recordings without license.

No US court has issued a definitive ruling on whether training AI models on copyrighted data constitutes fair use. The outcomes will have massive implications. A ruling that training is not fair use could require AI companies to license all training data retroactively -- potentially an existential cost. A ruling that training is fair use would validate current practices but would not address all creator concerns.

The Fair Use Factors in AI Context

US fair use analysis weighs four factors: the purpose and character of the use (including its commerciality and whether it is transformative), the nature of the copyrighted work, the amount used, and the effect on the market for the original. AI training is arguably transformative (it creates a new tool, not a copy), but the scale (billions of works) and the commercial nature of the resulting models complicate the analysis. Most legal scholars predict a nuanced outcome -- fair use for some model types and applications, not for others -- rather than a blanket ruling in either direction.

Ownership of AI-Generated Output

The US Copyright Office has taken an increasingly clear position: purely AI-generated content is not copyrightable. The Thaler v. Perlmutter decision (2023) confirmed that copyright requires human authorship. The Copyright Office's 2023 guidance on AI-generated works states that works created by AI without meaningful human creative input cannot receive copyright registration.

However, the Copyright Office has granted registration for works that involve substantial human creative expression alongside AI generation. The key is "meaningful human creative input" -- selection, arrangement, modification, and creative direction of AI outputs can establish copyrightability for the overall work, even if individual elements were AI-generated.

For creative businesses, the practical implications are:

  • Raw AI output (unmodified images, text, video) likely has no copyright protection
  • Curated, arranged, and modified AI content can be copyrighted if human creative expression is substantial
  • Document your creative process to establish human authorship claims
  • Do not rely on copyright protection for purely AI-generated assets used in competitive markets

Deepfakes and Synthetic Media

The Regulatory Crackdown

Deepfake regulation has accelerated faster than any other AI-related area. By late 2025, the EU AI Act requires labeling of all deepfakes, the US DEFIANCE Act (2024) creates civil liability for non-consensual intimate AI imagery, China requires consent for using anyone's likeness in AI-generated content, and multiple US states have criminalized specific categories of deepfakes.

For creative businesses: using AI to generate content depicting real people requires explicit consent, clear AI labeling, compliance with personality rights, and documentation of the creation process. Voice cloning has attracted particular attention -- Tennessee's ELVIS Act (2024) was the first US law specifically addressing AI voice replication, and several states have followed. The rule is simple: never clone a real person's voice without explicit, documented consent.

| Synthetic Media Type | Key Legal Requirements | Risk Level | Best Practice |
|---|---|---|---|
| AI-generated images (original) | Label as AI-generated in EU, select US states | Low-Medium | Always label in commercial contexts |
| AI images depicting real people | Consent required, personality rights apply | High | Written consent + labeling |
| AI video (original characters) | Label as AI-generated | Low-Medium | Label, maintain creation records |
| AI video depicting real people | Consent required, deepfake laws apply | Very High | Explicit consent + legal review |
| AI voice (original) | Disclose synthetic nature | Low | Label as AI-generated |
| AI voice cloning (real person) | Consent required, ELVIS Act + state laws | Very High | Written consent + legal review |
| AI-generated music | Copyright status unclear, label recommended | Medium | Label, avoid mimicking specific artists |

Practical Compliance for Creative Businesses

Building a Compliance Framework

Creative businesses do not need enterprise-grade AI governance systems. They need practical, proportionate processes that manage risk without paralyzing operations.

A workable framework includes:

Policy: A one-page AI usage policy stating which tools are approved, what content requires labeling, what requires human review, and what is prohibited (e.g., generating likenesses of real people without consent).

Labeling standards: Consistent metadata and visible labeling for AI-generated content. This can be as simple as including "Created with AI" in image metadata and post descriptions.
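One lightweight way to make the metadata half of this practice concrete is a machine-readable "sidecar" file written next to each AI-generated asset. This is a sketch under stated assumptions: the field names, the "Created with AI" label text, and the `.ai.json` suffix are illustrative conventions of our own, not a format mandated by the EU AI Act or any state law.

```python
# Sketch: record an AI-disclosure label in a JSON sidecar file next to
# the asset. The schema here is illustrative, not a legal standard.
import hashlib
import json
import pathlib


def write_disclosure_sidecar(asset_path: str, tool: str) -> str:
    """Record that asset_path is AI-generated; return the sidecar path."""
    p = pathlib.Path(asset_path)
    record = {
        "file": p.name,
        # Hash ties the disclosure to this exact version of the file.
        "sha256": hashlib.sha256(p.read_bytes()).hexdigest(),
        "ai_generated": True,
        "tool": tool,
        "label": "Created with AI",
    }
    sidecar = p.parent / (p.name + ".ai.json")
    sidecar.write_text(json.dumps(record, indent=2))
    return str(sidecar)
```

Pair a sidecar like this with the visible "Created with AI" text in post descriptions; the hash lets you prove later which exact file the disclosure covered.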

Record keeping: Maintain a log of AI-generated content including the tool used, date, prompt or input description, human review status, and publication location. A spreadsheet is sufficient for most businesses.
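The spreadsheet-grade log described above can be kept as a plain CSV file that any teammate can open. A minimal sketch, with the caveat that the column names are illustrative -- no regulation prescribes an exact schema, so adapt the fields to your own workflow:

```python
# Sketch: append-only AI usage log kept as a CSV file. Columns mirror
# the record-keeping list above; adjust them to your own process.
import csv
import pathlib

LOG_FIELDS = ["date", "tool", "prompt_summary", "human_reviewed", "published_at"]


def log_generation(log_path: str, **entry: str) -> None:
    """Append one row to the AI usage log, writing a header on first use."""
    path = pathlib.Path(log_path)
    write_header = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if write_header:
            writer.writeheader()
        # Missing fields are left blank rather than rejected, so partial
        # records during a busy production day still get captured.
        writer.writerow({field: entry.get(field, "") for field in LOG_FIELDS})
```

A call like `log_generation("ai_log.csv", date="2026-01-15", tool="Oakgen", prompt_summary="product hero image", human_reviewed="yes", published_at="company blog")` adds one audit-trail row.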

Training: Ensure everyone who uses AI tools understands the applicable disclosure requirements and company policy. A 30-minute training session annually is adequate for most teams.

Vendor review: Understand the terms of service and IP provisions of the AI tools you use. Most major platforms (Oakgen included) grant users commercial rights to generated output, but terms vary.

Start Simple, Stay Consistent

Regulatory compliance for AI creative tools does not need to be complicated. The most important step is consistency: pick a labeling standard, apply it uniformly, and document your process. Businesses that demonstrate good-faith compliance efforts face dramatically lower regulatory risk than those with no process at all. Start with a simple AI usage log and disclosure checklist.

What Comes Next

The regulatory trajectory is toward more regulation, not less. Key developments to watch: US federal AI legislation is likely by 2027, potentially preempting the state patchwork. The Getty, Andersen, and NYT copyright cases will likely produce significant rulings in 2026. The G7 Hiroshima AI Process is pushing toward international standards. And organizations like C2PA are developing content provenance standards that may become de facto requirements.

The businesses best positioned are those that treat compliance as a normal operating cost rather than an obstacle. The requirements are manageable, the penalties for willful non-compliance are substantial, and the competitive advantage of trustworthy, transparently created content is growing.

Frequently Asked Questions

Can I use AI-generated images commercially?

Yes, in most jurisdictions, AI-generated images can be used commercially. The key requirements are: use tools whose terms of service grant commercial rights (most major platforms do), label AI-generated content as required by applicable law (mandatory in the EU and several US states), do not generate images that infringe on specific copyrighted works or depict identifiable real people without consent, and understand that purely AI-generated images may not be copyrightable.

Who owns the copyright in AI-generated content?

In the US, purely AI-generated content (no meaningful human creative input) is not copyrightable under current Copyright Office guidance. However, if you substantially modify, curate, arrange, or creatively direct AI output, the resulting work can be copyrighted based on your human creative expression. The EU position is similar, though member state implementation varies. Document your creative process to support ownership claims.

What are the penalties for non-compliance with AI disclosure laws?

Penalties vary by jurisdiction. The EU AI Act allows fines up to 35 million EUR or 7% of global turnover for the most serious violations. US state laws vary, with California's AI Transparency Act carrying fines up to $50,000 per violation. In practice, early enforcement is focusing on egregious cases of non-disclosure and deceptive AI use rather than minor labeling oversights.

Should I tell clients that I use AI tools?

Yes. Both ethically and legally, transparency about AI usage in client work is the safest approach. Many client contracts now include AI disclosure clauses, and failing to disclose AI usage could constitute breach of contract or misrepresentation. Proactively communicating how AI enhances your work -- faster iterations, more options, lower costs -- positions AI usage as a value-add rather than a secret.

How do I keep up with changing AI regulations?

Follow the AI policy organizations that track regulatory developments: the Future of Life Institute, the AI Now Institute, and the IAPP (International Association of Privacy Professionals) all publish regular updates. Subscribe to your industry association's regulatory alerts. For businesses with significant AI exposure, an annual review with a technology-focused attorney is prudent.

Create with Confidence on Oakgen

Oakgen provides commercial-ready AI generation with clear usage rights. Generate images, video, audio, and music with transparent terms. Free credits on signup.
