• U.S. State Privacy Laws: 2025 Status Update

    By Saachi S. Gorinstein and Scott C. Hall

    By the end of 2025, comprehensive privacy laws will have taken effect in eight new states: Delaware, Iowa, Maryland, Minnesota, Nebraska, New Hampshire, New Jersey, and Tennessee. With twenty states expected to have such laws effective by year’s end and more than a dozen additional states actively considering similar legislation for 2026 and beyond, businesses must continue to navigate an increasingly complex and fragmented regulatory landscape. While all state privacy laws share common core principles, such as transparency in notice, data minimization, and opt-out rights for certain data usage, other aspects, such as applicability thresholds, consumer rights, and enforcement mechanisms, vary significantly across jurisdictions, all in the absence of a unifying federal privacy framework.

    General Principles of State Privacy Laws

    Certain baseline privacy principles remain consistent across all states. Businesses operating in any jurisdiction should provide clear notices to consumers about how their data is collected, used, and disclosed, and should limit the use of data collected to specific, disclosed purposes. Businesses should ensure they are collecting only the data necessary for legitimate business purposes and using it solely for the purposes stated in clear and conspicuous privacy notices.

    Consumer Rights

    Most states grant consumers a core set of rights that typically include the ability to access, delete, and correct personal data; request copies of their data (data portability); and opt out of targeted advertising, the sale of personal data, and certain types of profiling. However, there are notable exceptions. Iowa’s law does not provide consumers with the right to correct inaccurate data or to opt out of processing for targeted advertising and profiling, limiting individual control compared to other states. In contrast, Minnesota extends consumer protections by allowing individuals to understand the basis of profiling decisions, access the data used, and pursue alternative outcomes. Minnesota also grants a transparency right (similar to Oregon’s and Delaware’s) allowing consumers to request a list of third parties that have received their data. Maryland takes a more limited approach, allowing consumers to request a list of categories of third parties to whom their data has been disclosed.

    Opt-In Preferences and Data Protection Impact Assessments

    All state privacy laws require businesses to honor opt-out requests, and some require businesses to respect universal opt-out preference signals, such as Global Privacy Control (GPC), which let consumers communicate their preferences regarding the sale of personal data and targeted advertising across all websites without opting out on each site individually. Amidst enforcement attention on this topic from California regulators, new laws in Delaware, Nebraska, New Hampshire, and New Jersey require recognition of such signals, with Maryland and Minnesota set to align by the end of the year.
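
    As a concrete illustration, the GPC specification expresses the signal as an HTTP request header (`Sec-GPC: 1`) alongside a JavaScript property (`navigator.globalPrivacyControl`). The sketch below shows one minimal, hypothetical way a server might detect and honor the header; the helper names and profile fields are illustrative, not drawn from any statute or vendor API, and actual handling depends on your stack and the states at issue.

    ```python
    def gpc_opt_out_requested(headers: dict) -> bool:
        """Return True if the request carries a Global Privacy Control signal.

        Per the GPC specification, the signal is the request header `Sec-GPC: 1`.
        Lookup is case-insensitive here for robustness.
        """
        normalized = {k.lower(): v.strip() for k, v in headers.items()}
        return normalized.get("sec-gpc") == "1"


    def apply_privacy_preferences(headers: dict, profile: dict) -> dict:
        """Hypothetical example: treat a GPC signal as an opt-out of sale
        and targeted advertising for this visitor's session profile."""
        if gpc_opt_out_requested(headers):
            profile = {**profile, "sale_opt_out": True, "targeted_ads_opt_out": True}
        return profile
    ```

    In practice the opt-out would need to propagate to downstream ad-tech and analytics vendors, not just a local session flag.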

    Many new state laws also require businesses to conduct data protection impact assessments (“DPIAs”) and/or internal or external audits when engaging in “high-risk” processing. This typically includes activities such as selling or sharing data for targeted advertising, profiling, or processing sensitive personal information.

    Sensitive Information

    All state privacy laws, including those taking effect in 2025, impose heightened restrictions on the collection and processing of sensitive information, and several expand what qualifies as “sensitive.” New categories include national origin (Delaware, Maryland, New Jersey), transgender or non-binary status (Delaware, Maryland, New Jersey), biometric data (Maryland, Tennessee), and certain financial account information (New Jersey). Maryland’s law is particularly stringent, with a broad definition of “consumer health data” that includes information related to gender-affirming treatment and reproductive or sexual health care, and it prohibits processing or sharing sensitive information unless strictly necessary for a consumer-requested service, even with consent. Additionally, new state laws in Delaware, Maryland, Nebraska, New Hampshire, New Jersey, and Tennessee follow several already enacted state laws in requiring businesses to conduct DPIAs when processing sensitive data or engaging in other high-risk activities.

    Applicability Thresholds of State Privacy Laws

    Determining which state privacy laws apply to your business requires careful analysis. While California, Tennessee, and Utah use revenue-based thresholds (e.g., $25 million) either alone or in combination with other factors, most states rely on volume-based criteria, typically applying to businesses that process the personal data of 100,000+ residents or derive a certain portion of revenue from selling data.

    Several states have lower or broader thresholds:

    • Montana: Applies to businesses collecting personal information of 50,000 consumers, or 25,000 if 25%+ of revenue comes from data sales.
    • Maryland, New Hampshire, Delaware, Rhode Island (2026): Thresholds begin at 35,000 residents, with Delaware and Rhode Island also using a 20% revenue qualifier.
    • Texas and Nebraska: Among the broadest, apply to nearly any business that is not a “small business” under SBA definitions, with no numerical data thresholds.
    • Florida: Applies only to large for-profit companies with $1 billion+ in global revenue and certain tech-related operations.
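
    To make the threshold analysis concrete, the sketch below encodes Montana’s volume-based test as described in the bullets above. The numbers are taken from that description only; real applicability turns on statutory definitions, exemptions, and facts that should be confirmed with counsel, so this is an illustration of the analysis, not a compliance tool.

    ```python
    def montana_law_may_apply(residents_processed: int,
                              revenue_share_from_sales: float) -> bool:
        """Illustrative only: Montana's thresholds as summarized above.

        The law applies at 50,000 consumers' personal data, or at 25,000
        if 25%+ of revenue comes from data sales. This ignores statutory
        exemptions and definitional nuances.
        """
        if residents_processed >= 50_000:
            return True
        return residents_processed >= 25_000 and revenue_share_from_sales >= 0.25
    ```

    The same pattern, with different numbers and qualifiers per state, is why a per-jurisdiction data map matters: the identical processing footprint can be in scope in one state and out of scope in its neighbor.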

    Adding to the complexity, California uniquely includes employee, contractor, job applicant, and business-to-business transaction data under its CPRA, while most other states limit “consumer” to individuals acting in a personal or household context.

    As a result, businesses must be aware of their data collection and processing activities in each state with a privacy law, and must analyze those activities against the requirements of each applicable state law.

    Enforcement of State Privacy Laws

    Like most state privacy laws, the 2025 statutes do not authorize any private rights of action (California remains the exception for certain data breaches involving sensitive personal information). Enforcement authority generally lies with each state’s Attorney General (or, in California, the California Privacy Protection Agency), and regulators are expected to take a more active role in investigating compliance and responding to consumer complaints, especially those involving sensitive personal data. Most of the new laws also include cure periods, giving businesses an opportunity to correct violations before enforcement proceeds. Notably, New Jersey’s law grants rulemaking authority to the Director of the Division of Consumer Affairs, signaling that additional implementing regulations may follow, similar to frameworks in California and Colorado. A unique provision in Tennessee’s law introduces an affirmative defense to enforcement actions, the first of its kind among U.S. privacy statutes. Businesses may invoke this defense by demonstrating that they maintain a written privacy program that “reasonably conforms” with the National Institute of Standards and Technology (NIST) privacy framework or a comparable standard. This incentivizes the adoption of widely recognized best practices and supports a more proactive approach to privacy compliance.

    Takeaways for Businesses

    With twenty comprehensive privacy laws expected to be effective by the end of 2025 and many more under consideration, privacy compliance is a national business imperative. Although discussions around a federal privacy law continue, no such law has yet materialized. As in the past, companies cannot rely on potential federal intervention to alleviate the burden of multi-jurisdictional compliance.

    It is essential for all businesses to consistently map their data collection, use, and disclosure, update privacy policies and notices, implement consumer rights request mechanisms, honor opt-out and limitation requests, and continue to monitor evolving requirements and implement scalable, principle-based privacy programs that can adapt to a shifting—and ever-increasing—patchwork of obligations.

    See the U.S. State Privacy Laws – Applicability Thresholds chart linked here for more details.

    If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com for further information or assistance.

     

  • Effective September 1, 2025: Texas Expands Telemarketing Law to Cover Text Messages

    By Scott Hall

    Beginning September 1, 2025, Texas will broaden the scope of its telemarketing law (Chapter 302 of the Business & Commerce Code) to explicitly include text message marketing within the definition of “telephone solicitation.” Companies using SMS for promotional purposes that include messaging consumers in Texas should assess whether they are now subject to new registration, bonding, and compliance obligations.

    Key Requirements

    Companies engaged in sales-oriented SMS outreach to Texas consumers must:

    • Register with the Texas Secretary of State
    • Submit a $200 filing fee
    • Post a $10,000 security (via a bond, irrevocable letter of credit, or certificate of deposit)
    • File quarterly addenda listing all salespersons engaged in solicitations
    • Comply with disclosure and recordkeeping mandates

    These requirements have historically applied to voice-based telemarketing, but the amendment clarifies their application to modern communication platforms, including SMS.

    Exemptions

    A number of exemptions apply under Subchapter B of Chapter 302, most notably:

    • Outreach to current or former customers: No registration is required if messages are limited to prior customers and the business has operated under the same name for at least two years.
    • Educational Institutions and Nonprofits
    • Publicly Traded Companies and Financial Institutions regulated at the state or federal level
    • Sellers of food products, newspapers, periodicals, or cable subscriptions
    • Retailers with physical locations that have operated for two years under the same name, and a majority of business occurs at those locations as opposed to online
    • Isolated solicitations that are not part of a recurring pattern

    There is not a lot of guidance on the scope of these exemptions, and they may depend on specific facts and circumstances that should be discussed with legal counsel. Also note that the law only exempts sellers (not third-party platforms) unless the provider is contracting predominantly with exempt businesses and meets other criteria.

    Security Requirement

    For those subject to the law, a $10,000 security deposit must accompany the registration. This can be satisfied by:

    • A surety bond from a licensed company
    • An irrevocable letter of credit from a federally insured financial institution
    • A certificate of deposit with restricted withdrawal rights

    The purpose is to create a recovery mechanism for consumers harmed by a seller’s insolvency or contractual breach.

    Broader Compliance Considerations

    While Texas provides a customer-based exemption, companies should also keep in mind other states such as Florida, Maryland, and Oklahoma that have strict SMS marketing laws without such exemptions, as well as federal law (TCPA), which still requires prior express written consent for most automated marketing texts. Companies relying on marketing platforms like Klaviyo or Attentive should also review their vendor contracts to ensure data is being used only on the company’s behalf and in compliance with these rules.

    Recommendations

    • Assess whether your SMS campaigns involve Texas consumers (or consumers in other states with registration or other special compliance requirements).
    • Review eligibility for exemptions, especially under the “former or current customer” carve-out.
    • Confirm you are properly capturing and storing user consent, ideally in a verifiable format.
    • Evaluate whether your platform provider meets service-provider-only criteria, or whether any contract amendments are needed.
    • Prepare registration materials and financial security, if required.
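
    On the consent-capture recommendation above, a “verifiable format” generally pairs the exact disclosure the consumer saw with who agreed, when, and how. The sketch below is one minimal, hypothetical shape for such a record; the field names are illustrative examples, not requirements drawn from the Texas statute or the TCPA.

    ```python
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone


    @dataclass(frozen=True)
    class SmsConsentRecord:
        """Illustrative consent record for SMS marketing.

        Fields are examples of what a verifiable record might capture;
        they are not statutory requirements.
        """
        phone_number: str
        consent_text_shown: str   # the exact disclosure the consumer saw
        method: str               # e.g., "web_form" or "keyword_reply"
        captured_at: str          # ISO-8601 UTC timestamp


    def new_consent_record(phone: str, disclosure: str, method: str) -> dict:
        """Build an immutable consent record and return it as a plain dict."""
        record = SmsConsentRecord(
            phone_number=phone,
            consent_text_shown=disclosure,
            method=method,
            captured_at=datetime.now(timezone.utc).isoformat(),
        )
        return asdict(record)
    ```

    Storing the disclosure text itself, rather than just a boolean flag, is what makes the record useful if consent is later disputed.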

    Please reach out to the Coblentz team for further information or assistance.

  • Navigating the Shifting AI Landscape: What U.S. Businesses Need to Know in 2025

    By Mari Clifford and Scott Hall

    Artificial intelligence is no longer a wild west frontier technology—it’s a regulated one. As AI systems become central to how companies operate, communicate, and compete, legal oversight is catching up. In 2025, AI governance is defined by divergence: a harmonized, risk-based regime in the EU; a fragmented, reactive framework in the U.S.; and rapid regulatory expansion at the state and global levels. Businesses deploying or developing AI must now navigate a multi-jurisdictional patchwork of laws that carry real compliance, litigation, and reputational consequences.

    This article outlines the key regulatory developments, contrasts the EU and U.S. approaches, and offers concrete recommendations for U.S. companies operating AI systems.

    EU AI Act: Global Reach with Teeth

    The EU AI Act, which entered into force in August 2024, is the world’s first comprehensive, binding legal framework for AI. It classifies systems by risk level—unacceptable, high, limited, and minimal—and imposes extensive obligations on high-risk and general-purpose AI (GPAI) models. High-risk AI systems must undergo pre-market conformity assessments, maintain technical documentation, and register in a public EU database. GPAI models face additional transparency, copyright, and cybersecurity obligations, particularly if they exceed scale thresholds (e.g., >10,000 EU business users).

    The Act’s extraterritorial reach means U.S. companies offering AI products or services in the EU—or whose outputs affect EU residents—must comply. Notably, failure to implement the EU’s “voluntary” GPAI Code of Practice could shift the burden of proof in enforcement actions.

    Timeline to Watch: The law becomes enforceable starting August 2026, with GPAI obligations phasing in from 2025.

    The U.S. Approach: Fragmentation, Tension, and State-Level Acceleration

    Executive Orders & Federal Initiatives

    U.S. federal law remains sectoral and piecemeal. President Biden’s 2023 Executive Order on “Safe, Secure, and Trustworthy AI” established guiding principles, including fairness, transparency, and privacy protections, and tasked agencies with issuing AI-specific standards. However, this was rescinded in 2025 by the Trump administration’s new EO prioritizing deregulation and “American leadership in AI,” creating a sharp policy pivot and regulatory uncertainty. In parallel, the administration also unveiled a draft AI Action Plan, emphasizing voluntary industry standards and innovation incentives over binding rules. While still in flux, this initiative further underscores the unsettled political climate around federal AI policy.

    While bills like the AI Accountability Act and the SAFE Innovation Framework have been proposed, no comprehensive federal AI law has passed. Instead, federal agencies like the FTC, EEOC, and CFPB continue to regulate AI through existing consumer protection and civil rights laws—often through enforcement actions rather than formal rulemaking.

    State Spotlight: Colorado, California, and Others Lead the Way

    Absent a comprehensive federal law, states have moved decisively. The list below highlights a representative sample of enacted state AI statutes as of July 2025; dozens of additional bills are pending and advancing every legislative cycle:

    Arizona

    • HB 2175 – requires health-insurer medical directors to personally review any claim denial or prior-authorization decision that relied on AI, exercising independent medical judgment (in force on June 30, 2026).

    California

    • AB 1008 – expands the CCPA definition of “personal information” to cover data handled or output by AI.
    • AB 1836 – bars commercial use of digital replicas of deceased performers without estate consent.
    • AB 2013 – requires AI developers to post detailed training-data documentation.
    • AB 2885 – creates a uniform statutory definition of “artificial intelligence” (effective January 1, 2025).
    • AB 3030 – mandates clear gen-AI disclaimers in patient communications from health-care entities (effective January 1, 2025).
    • SB 1001 “BOT” Act – online bots that try to sell or influence votes must self-identify.
    • SB 942 AI Transparency Act – platforms with >1M monthly users must label AI-generated content and provide a public detection tool.

    Colorado

    • SB 24-205 Colorado AI Act – first comprehensive U.S. framework for “high-risk” AI; imposes reasonable-care, impact-assessment, and notice duties on developers and deployers (effective 2026).
    • SB 21-169 – bans unfair discrimination by insurers through algorithms or predictive models.
    • HB 23-1147 – requires deep-fake disclaimers in election communications.
    • Colorado Privacy Act – consumers may opt out of AI “profiling” that produces legal or similarly significant effects; DPIAs required for such processing.

    New York

    • New York City Local Law 144 – employers using automated employment-decision tools must obtain an annual independent bias audit and post a summary.

    Tennessee

    • HB 1181 – Tennessee Information Protection Act (2024) – statewide privacy law; impact assessments required for AI profiling posing significant risks.
    • “ELVIS Act” (2024) – makes voice mimicry by AI without permission a Class A misdemeanor and grants a civil cause of action.

    Texas

    • Texas Data Privacy and Security Act – lets Texans opt out of AI profiling that has significant effects and compels risk assessments for such uses.

    Utah

    • SB 149 “AI Policy Act” (amended by SB 226) – requires disclosure when consumers interact with generative-AI chat or voice systems and sets professional-licensing guardrails.
    • HB 452 – “Artificial Intelligence Applications Relating to Mental Health” – regulates the use of mental health chatbots that employ artificial intelligence (AI) technology.

    Expect additional Colorado-style comprehensive AI frameworks to surface in 2025-26 as states continue to fill the federal gap.

    Global Developments & Cross-Border Tensions

    Beyond the EU and U.S., countries like Brazil, China, Canada, and the U.K. are advancing AI governance through a mix of regulation and voluntary standards. Notably:

    • China mandates registration and labeling of AI-generated content.
    • Brazil is poised to pass a GDPR- and EU AI Act-style law.
    • The U.K. continues to favor a principles-based, regulator-led approach but may pivot toward binding regulation.

    U.S.-EU divergence has triggered geopolitical friction. The EU’s upcoming GPAI Code of Practice is a flashpoint, with U.S. officials warning it could disproportionately burden American firms. Meanwhile, the U.S. may reconsider participation in multilateral frameworks like the Council of Europe’s AI Treaty.

    A Compliance Playbook for 2025

    AI legal exposure increasingly mirrors privacy law: patchwork rules, aggressive enforcement, and high reputational stakes. To mitigate risk, companies should:

    • Inventory AI Systems: Identify all AI tools in use—especially those making or influencing decisions in high-risk sectors (HR, healthcare, finance, etc.).
    • Conduct Risk Assessments: For GPAI or high-risk tools, assess training data, bias exposure, and explainability. Use frameworks like NIST’s AI RMF or the EU’s conformity checklist.
    • Build Cross-Functional Governance: Legal, compliance, technical, and product teams must coordinate. Assign AI risk ownership and create change triggers for reclassification (e.g., changes in use or scale).
    • Monitor State and Federal Law Developments.
    • Plan for EU Market Entry: Determine whether EU-facing AI systems require local representation, registration, or conformity assessment under the AI Act.
    • Audit Communications: Avoid AI-washing. Public statements about capabilities, safety, or human oversight must match internal documentation and performance.
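
    The inventory and risk-assessment steps above can be sketched as a simple triage pass over a registry of AI systems. The domain list and field names here are illustrative assumptions, not categories drawn from any statute; the point is only that a structured inventory makes the “which tools need a deeper assessment?” question mechanical.

    ```python
    # Illustrative sketch: flag inventoried AI systems for deeper review when
    # they influence decisions in commonly regulated high-risk domains.
    HIGH_RISK_DOMAINS = {"hr", "healthcare", "finance", "housing", "education"}


    def triage_ai_inventory(systems: list) -> list:
        """Annotate each inventoried system with a review flag.

        Each entry is expected to look like:
          {"name": "resume-screener", "domain": "hr", "influences_decisions": True}
        """
        triaged = []
        for s in systems:
            needs_review = (
                s.get("influences_decisions", False)
                and s.get("domain", "").lower() in HIGH_RISK_DOMAINS
            )
            triaged.append({**s, "needs_risk_assessment": needs_review})
        return triaged
    ```

    A real program would also record vendor, training-data provenance, and the jurisdictions each system touches, since those drive the EU AI Act and state-law analyses discussed above.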

    The message from global regulators is clear: innovation is welcome, but governance is non-negotiable. Whether operating domestically or globally, businesses must prepare for AI compliance to become a core legal discipline, akin to privacy or cybersecurity.

    For legal teams and compliance leaders, now is the time to move from principles to programs—and to see governance as a competitive advantage, not just a regulatory burden.

    If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com for further information or assistance.

  • 2025 Mid-Year Privacy Report

    A Comprehensive Look at New Developments in Data Privacy Laws

    By Scott Hall, Mari Clifford, Leeza Arbatman, Kat Gianelli, Saachi Gorinstein, and Hunter Moss

    Download a PDF version of this report here.

    In 2025, privacy and AI regulation have moved from the sidelines to the center of business risk and strategy. U.S. states are rapidly enacting a patchwork of privacy laws, with new AI laws emerging and expected to increase. Meanwhile, regulators are tightening oversight of automated decision making, children’s data, health metrics, and cross-border data transfers. And litigation over online data collection by companies continues to expand under various statutes, including wiretapping and pen register claims under the California Invasion of Privacy Act (CIPA), and claims under the Video Privacy Protection Act (VPPA), resulting in diverging court rulings that send mixed signals to companies regarding privacy compliance.

    Our Mid-Year Privacy Report examines the most significant developments shaping the privacy and AI landscape in 2025 and highlights practical steps businesses can take to navigate an increasingly complex, multi-jurisdictional legal landscape.

    You can download the full report here. If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com for further information or assistance.

  • New California Regulations Regarding AI Use in Hiring and Employment

    By Fred Alvarez and Hannah Withers 

    If your company is using AI or other automated decision-making systems to make employment or hiring decisions, a new set of California regulations taking effect on October 1, 2025 will require your attention.

    Issued by the California Civil Rights Council (CRC), these regulations define what types of AI systems are being regulated, what types of employment decisions are included, and what employers need to do to prevent discrimination claims resulting from the use of these AI tools. If you (or your agents) are using AI in any manner related to employment decisions, we suggest familiarizing yourself with the key aspects of the regulations, which are summarized below.

    The purpose of the regulations is to respond to concerns about the potentially discriminatory impact of AI tools in employment and to make clear that California’s anti-discrimination laws still apply when employers use AI tools to make decisions. An employer cannot escape liability for a discriminatory hiring or employment decision simply because the decision was made by an AI tool; companies remain liable for discriminatory outcomes even when AI tools produce them.

    Key Sections of the Regulations to be Familiar With:

    • An “Automated-decision system” is defined as “a computational process that makes a decision or facilitates human decision making regarding an employment benefit… [it] may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.”
    • The covered employment and hiring practices are quite broad and include decisions related to recruitment, applicant screening, background checks, hiring, promotion, transfer, pay, benefit eligibility, leave eligibility, employee placement, medical and psychological examination, training program selection, or any condition or privilege of employment.
    • An “agent” includes “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity, which may include applicant recruitment, applicant screening, hiring, promotion, or decisions regarding pay, benefits, or leave, including when such activities and decisions are conducted in whole or in part through the use of an automated decision system. An agent of an employer is also an ‘employer’ for purposes of the Act.”
    • Under the rules, it is unlawful to use AI systems that result in employment discrimination based on protected characteristics (e.g., religion, race, gender, disability, national origin, age, etc.). The rules also specify that the use of technology that has an adverse impact is “unlawful unless job-related and consistent with business necessity and the technology includes a mechanism for requesting an accommodation.”
    • There is a four-year record retention requirement for employment records created or received (including applications, personnel records, membership records, employment referral records, selection criteria, and automated-decision system data) dealing with a covered employment practice or employment benefit.

    Here is What We Recommend Considering and Preparing For:

    • Determine if you are using any AI systems that should be evaluated for possible discriminatory impact. Consider whether you use any of the following systems with employees or applicants:
      • Computer-based assessments or tests, such as questions, puzzles, games, or other challenges.
      • Automated systems to direct job advertisements or other recruiting materials to targeted groups.
      • Automated systems to screen resumes for particular terms or patterns.
      • Automated systems to analyze facial expression, word choice, and/or voice in online interviews.
      • Automated systems to analyze employee or applicant data acquired from third parties.
    • If you use an automated decision-making system, examine what data is being collected, how decisions are being made, and any discriminatory impact that might result. Consider how applicants or employees in each protected group might be affected by the way the automated systems are set up and implemented. (For example, online application technology that limits, screens out, ranks, or prioritizes applicants based on their schedule may discriminate against applicants based on their religious creed, disability, or medical condition. Such a practice having an adverse impact is unlawful unless it is job-related and consistent with business necessity and the technology includes a mechanism for the applicant to request an accommodation.)
    • Investigate how the automated decision-making systems you are using check for bias with respect to protected classifications. Efforts to check for bias can be used as a defense to claims of discrimination. To check for bias, you should investigate whether the AI tool:
      • Monitors for unintended discriminatory effects during the ongoing use of the software.
      • Conducts live bias tracking as decisions are being made.
      • Supports compliance with anti-discrimination requirements in states that require annual independent bias audits of AI tools in hiring.
      • Supports customer compliance with applicant notice requirements in states that have laws relating to the use of AI tools in hiring.
      • Maintains records of hiring decisions and AI processes conducted.
      • Has materials or a white paper regarding employment and privacy law compliance.
      • Has measures in place to prevent discriminatory outcomes in its algorithm design and outputs.
    • If you work with third parties or other agents for hiring or employment decisions, talk to them about what AI tools they are using and learn more about what information those tools gather and how they make decisions. You are responsible for how your agent is using AI tools and you should communicate with agents specifically about their compliance with these California regulations.
    • Review your accommodations policies to make sure that any automated systems being used are not operating in a way that would miss the need for an accommodation.
    • Review your record retention practices for employment and hiring decisions to make sure that you keep records of hiring or employment decisions made using AI systems for 4 years.
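
    One widely used screen for the kind of adverse impact described above is the EEOC’s “four-fifths rule”: a selection rate for any group below 80% of the highest group’s rate is generally treated as evidence of adverse impact. The sketch below illustrates that arithmetic only; it is a monitoring heuristic, not a legal standard for compliance, and flagged results call for legal and statistical review rather than automated conclusions.

    ```python
    def selection_rates(outcomes: dict) -> dict:
        """Compute the selection rate (selected / applicants) per group.

        `outcomes` maps a group label to a (selected, applicants) pair.
        Groups with zero applicants are omitted.
        """
        return {g: sel / total for g, (sel, total) in outcomes.items() if total > 0}


    def four_fifths_flags(outcomes: dict) -> dict:
        """Flag groups whose selection rate falls below 80% of the top rate.

        A flag is a screen for possible adverse impact, not a legal finding.
        """
        rates = selection_rates(outcomes)
        top = max(rates.values())
        return {g: (r / top) < 0.8 for g, r in rates.items()}
    ```

    For example, if one group is selected at 50% and another at 30%, the ratio is 0.6 and the second group would be flagged for further review.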

     

    The Coblentz Employment team is available to answer any questions you may have about the impact of these regulations and how to prepare logistically ahead of their effective date on October 1, 2025. For additional information, you may also refer to the CRC’s press release.

  • Monkey Business No More: Ninth Circuit Rules NFTs Are Protected by Trademark Law, Confirms the Limits of Expressive Speech Protection, but Overturns Judgment of Likely Confusion

    By Sabrina Larson and Kat Gianelli

    Key Takeaways

    • The Ninth Circuit confirmed that non-fungible tokens (NFTs) are ‘goods’ under the Lanham Act and can be protected by trademark law.
    • Even if a defendant uses a trademark owner’s mark with the goal of commentary and criticism, the fair use doctrine will not protect that use where the defendant uses the mark to designate its own goods.
    • The First Amendment does not protect a defendant’s unlicensed use of a trademark when the use of the mark is at least partially acting as a source identifier, even if the defendant intended such use as satire and expressive speech.
    • The decision is a win for brand owners promoting digital assets.

    The Ninth Circuit ruled in Yuga Labs, Inc. v. Ryder Ripps on July 23, 2025 that non-fungible tokens (NFTs) are eligible for trademark protection under the Lanham Act, a significant development for creators of digital tokens. The Court also confirmed the limits of protection for satirical, expressive speech where the defendant nonetheless uses the plaintiff’s trademarks as source identifiers.

    The Court, however, overturned the lower court’s $8.8 million judgment for Yuga, finding that Yuga had not proven at summary judgment that the defendants’ tokens are likely to confuse NFT buyers.

    Background

    Yuga Labs, who created the NFT “Bored Ape Yacht Club,” sued artists Ryder Ripps and Jeremy Cahen for creating a nearly identical NFT titled “Ryder Ripps Bored Ape Yacht Club,” which was tied to the same ape images as Yuga’s NFTs. Yuga alleged trademark infringement and unlawful cybersquatting.

    Examples of Yuga’s Bored Ape NFTs[1]

    The defendants claimed their project was a satirical protest, and countersued alleging violation of the Digital Millennium Copyright Act (DMCA) and sought declaratory relief that Yuga had no copyright protections over Bored Apes.

    The district court granted summary judgment for Yuga on its trademark infringement claim and anti-cybersquatting claim, and also granted summary judgment for Yuga with regards to the defendants’ DMCA counterclaim, resulting in an $8.8 million judgment for Yuga, which the artists appealed.

    Ninth Circuit Analysis and Decision

    One of the defendants’ defenses was to argue that NFTs are not ‘goods’ under the Lanham Act, but the Ninth Circuit disagreed, holding that NFTs are protectable as ‘goods’ under the Lanham Act and affirming that Yuga’s “Bored Ape Yacht Club” trademarks are enforceable despite their digital nature. This conclusion aligns with the U.S. Patent & Trademark Office, which has also concluded that NFTs are ‘goods.’ The Court reasoned that NFTs are “more than a digital deed to or authentication of artwork” because they “also function as membership passes, providing ‘Ape holders’ with exclusive access to online and offline social clubs, branded merchandise, interactive digital spaces, and celebrity events.” The Court concluded, “Yuga’s NFTs are not merely monkey business and can be trademarked.”

    The defendants also argued that they made nominative fair use of the Yuga marks. A common example of fair use is where one “‘deliberately uses another’s trademark or trade dress for the purposes of comparison, criticism, or point of reference.’”[2] The Court disagreed because the defendants used the Yuga marks not merely to reference Yuga’s NFTs, but as trademarks – that is, to create, promote, and sell their own NFTs. In that case, “[i]t does not matter that Defendants’ ultimate goal may have been criticism and commentary.”[3]

    The Court also rejected the defendants’ argument under the First Amendment that their NFTs were part of an expressive art project and that the “expressive nature” of their use of the Yuga marks entitled them to an exception to trademark infringement for expressive speech. Again, the Court disagreed because this exception does not apply where the defendant uses the marks as source identifiers. “[W]hen a use of the plaintiff’s mark is ‘at least in part for source identification,’ the First Amendment exception to trademark enforcement is foreclosed.”[4]

    Ultimately, the Court reversed the district court’s grant of summary judgment on the trademark infringement and cybersquatting claims, finding that the likelihood of consumer confusion, which is central to both claims, presents factual disputes that must be resolved at trial. Although the defendants’ satirical use did not establish nominative fair use or protect the use of the marks under the First Amendment, the Court noted that this purpose created “significant questions about whether the likelihood-of-consumer-confusion requirement was satisfied.”

    The panel affirmed the dismissal of the defendants’ counterclaims under the DMCA and for declaratory relief, concluding there was no evidence of knowing misrepresentation or an active copyright dispute.

    Conclusion and Takeaways

    The Ninth Circuit emphasized that “when we apply ‘established legal rules to the totally new problems’ of emerging technologies, our task is ‘not to embarrass the future.’”[5] This decision marks a significant step in adapting traditional intellectual property law to the evolving digital economy. It is a win for brand owners operating in the digital economy, opening the door for them to bring claims against infringing digital goods as they traditionally have against counterfeit products.

    While the Court remanded for a determination of whether the defendants infringed Yuga’s marks, it clarified that NFTs are not exempt from the protections and tenets of trademark law in the Ninth Circuit – NFTs are ‘goods’ under trademark law, and trademark infringement analysis must be applied when those marks are used at least in part as source identifiers by the defendant even with the intention of criticism and satire.

     

    [1] Yuga Labs, Inc. v. Ryder Ripps, No. 24-879 (9th Cir. 2025), Opinion (“Op.”) at 10.

    [2] Op. at 34, quoting E.S.S. Ent. 2000, Inc. v. Rock Star Videos, Inc., 547 F.3d 1095, 1098 (9th Cir. 2008).

    [3] Op. at 36. See Jack Daniel’s Props., Inc. v. VIP Prods. LLC, 599 U.S. 140, 148 (2023) (explaining a defendant does not get the benefit of fair use “even if engaging in parody, criticism, or commentary – when using the similar-looking mark ‘as a designation of source for the [defendant’s] own goods’” (alteration in original) (citation omitted)). See our analysis of the Jack Daniel’s decision here.

    [4] Op. at 41, quoting Jack Daniel’s, 599 U.S. at 156. See our analysis of the Jack Daniel’s decision here.

    [5] Op. at 6, quoting TikTok Inc. v. Garland, 604 U.S. –, 145 S. Ct. 57, 62 (2025) (cleaned up and internal quotations omitted).


  • 2025 CEQA Reforms: What Developers Need to Know

    By Miles Imwalle, Megan Jennings, Elena Neigher, Alyssa Netto, and Craig Spencer

    Governor Gavin Newsom signed two budget trailer bills on June 30, 2025, enacting the most substantial reforms to the California Environmental Quality Act (CEQA) in over five decades. To help you navigate these important changes, we have prepared a three-part summary of budget trailer bills Assembly Bill 130 and Senate Bill 131:

    New CEQA Exemption for Infill Housing Development Projects: What it Means for Developers 

    AB 130 and SB 131 were adopted on the last day of the 2024-25 fiscal year after the Governor made it clear he would not approve the budget without meaningful CEQA reforms. While not the sweeping “rollback” of environmental review that some sources have claimed, the legislation will undoubtedly smooth the path to approval for many infill housing projects. In this post, we focus on the criteria for using the new exemption for housing development projects in AB 130. Read more here.

    “Near-Miss” CEQA Streamlining: New Option to Reduce Scope of Review for Housing Development Projects 

    SB 131 includes a new CEQA process that limits the environmental review required for “near-miss” housing development projects—those projects that meet all criteria for a CEQA exemption, except for a single disqualifying condition. Specifically, the environmental review in these instances is restricted to analyzing impacts stemming exclusively from the single condition that disqualifies the housing project from receiving a statutory or categorical exemption. Read more here.

    CEQA Transportation Mitigation Fees and Other Key Reforms in AB 130 and SB 131 

    In our third update on the important changes in budget trailer bills AB 130 and SB 131, we cover changes to the mitigation options for vehicle miles traveled (VMT), additional focused CEQA exemptions, and other amendments to land use processes. Read more here.

    The Coblentz Real Estate Team has extensive experience with the state’s latest land use laws and can help you navigate their complexities and opportunities. Please contact us for additional information or with any questions about the impact of this legislation on land use and real estate development.

  • California Releases Final Employee Notice on Victim Leave Rights

    By Fred W. Alvarez, Hannah Jones, Dan Bruggebrew, Allison Moser, Paige Pulley, Hannah Withers, and Stacey Zartler

    The California Civil Rights Department (CRD) has just released its long-awaited model employee notice, triggering a new compliance obligation for all California employers regarding the rights of employees who are victims of qualifying acts of violence. This is a good time to review your policies and onboarding materials to ensure you’re providing this notice to employees now and going forward.

    What’s New?

    Effective immediately, employers must provide notice to employees about their rights to take protected leave and request workplace accommodations if they or their family members are victims of certain crimes. This requirement is tied to Assembly Bill 2499 (codified as Government Code §12945.8), which expanded existing protections and made notice mandatory now that the CRD model notice is available. The model notice is located here: CRD Model Notice

    Who Needs to Comply?

    All California employers, regardless of size, are required to provide this notice.

    If you have 25 or more employees, additional protections apply to employees whose family members are victims of a qualifying act of violence. “Family member” is broadly defined to include a child, parent, grandparent, grandchild, sibling, spouse, domestic partner, or “designated person,” meaning someone related by blood (such as an aunt or uncle) or someone who is equivalent to a family member (such as a best friend). Employers may limit an employee to one “designated person” per 12-month period.

    When and How to Provide the Notice

    The new law requires you to give this notice in four scenarios:

    • At hire – Include it in your onboarding packet effective immediately.
    • Annually – Distribute it to all employees once per year.
    • Upon request – Provide it to any employee who asks.
    • When notified – Provide it if an employee tells you that they or a family member is a victim of a qualifying crime.

    You can use the CRD’s model notice or create your own version, as long as it’s substantially similar in both content and clarity. If 10% or more of your workforce at a location speaks a language other than English, you’ll need to provide the notice in that language. The CRD has made translated versions available on its website.

    What the Notice Covers

    The notice explains an employee’s rights, including:

    • Job-protected leave for medical care, counseling, safety planning, or legal help related to the incident.
    • Workplace safety accommodations, like schedule changes, reassignment, or security assistance—subject to an interactive process and undue hardship standard.
    • Protection from retaliation for using these rights.
    • Confidentiality of any information shared regarding the incident or related requests.

    It also reminds employees they may be eligible for wage replacement under State Disability Insurance or Paid Family Leave, and may qualify for bereavement leave and other forms of crime victim leave under separate Labor Code provisions and applicable law.

    What You Should Do Now

    Here’s a practical checklist to help you meet your new obligations:

    • Download and review the CRD’s model notice.
    • Add the notice to your onboarding documents and distribute it to current employees annually.
    • Train HR and managers to respond appropriately when employees raise concerns or request time off or accommodations under this law.
    • Be prepared to provide the notice to current employees if they make a request.

    Want More Details? Read the CRD’s FAQ

    The CRD has also published an FAQ document that answers common employer questions about the law and the notice requirement. You can view it here: CRD FAQs

    Here are a few highlights:

    • What is a “qualifying act of violence”?
      It’s broader than domestic violence or sexual assault—it includes any crime that causes physical or mental injury, or the death of a family member.
    • Can we create our own notice instead of using the CRD version?
      Yes, but it must be substantially similar in both content and clarity.
    • Do we have to provide this notice to existing employees immediately?
      While there isn’t a specific requirement that notice be provided to existing employees immediately, employers must provide the notice annually, and we recommend rolling it out as soon as practical.
    • What happens if we don’t comply?
      Non-compliance can lead to enforcement action by the CRD, including penalties for failing to provide the notice or interfering with protected leave rights.

    If you’d like support reviewing your materials, preparing communications, or training your team, we’re here to help. Let us know if you’d like the notice translated into your preferred language(s), or if you’d like assistance adapting it into your onboarding materials.

  • Federal Reserve to Implement New ISO 20022 Funds Wiring System

    By Kyle J. Recker and Max Martinez

    The Federal Reserve is imminently shifting to a new funds wiring system known as ISO 20022. If you have upcoming plans to transfer any amount of funds via wire transfer, confirm with your bank and anyone else handling your funds that they are prepared for the shift to ISO 20022 and can accommodate your wire on the planned date of transfer.

     

    On July 14, 2025, the Federal Reserve plans to implement a new funds wiring and messaging format, ISO 20022, to modernize both domestic and cross-border wire transfers. After three years of development and trials, the Federal Reserve will sunset its existing wiring system, the Fedwire Application Interface Manual (FAIM), which is currently used nationwide by banks, escrow services, and other funds exchange operations to facilitate wiring of funds from one party to another. “ISO” refers to the International Organization for Standardization, and the change to ISO 20022 will align the Federal Reserve wire transfer system with those used in other payments markets, including those of key U.S. trading partners. The ISO 20022 system allows for more detailed information to be included with a wire transfer, which is expected to improve efficiencies in related wire transfer processes and result in faster and more reliable payments. The upgrade should be welcome news to anyone regularly involved in closing transactions that involve the wiring of funds, as the existing FAIM system has been known to cause some consternation due to its lack of transparency and predictability (e.g., the anxiety-ridden waiting period from funds wiring to receipt by the escrow service for a same-day transaction closing).

    The ISO 20022 system underwent customer testing from March to June 2025 (in which ISO 20022 was actually used for certain planned wire transfers in commercial settings), while testing of the system’s online portal interface has been ongoing since March 2023. However, each FAIM user is responsible for developing its own preparedness and contingency plans in connection with the phase-out of FAIM and the implementation of ISO 20022, so there may be some variance among institutions in the smoothness and efficiency of the transition. If you have upcoming plans to transfer any amount of funds via wire transfer, particularly a large sum, you should confirm with your bank and anyone else handling your funds that they are prepared for the shift to the ISO 20022 system and can accommodate your wire on the planned date of transfer. You should also be prepared for the possibility of wires being delayed due to transitional complications. If possible, it may be prudent to wire funds in advance of any upcoming closings or be prepared to extend or delay a closing date by a few days.

    As always, we encourage you to reach out to us with any questions on this topic or as may be needed in connection with any specific projects.

    Sources:

    https://www.frbservices.org/resources/financial-services/wires/iso-20022-implementation-center
    https://www.frbservices.org/news/communications/061825-fedwire-iso-go
    https://www.frbservices.org/resources/financial-services/wires/iso-20022-implementation-center/fedwire-iso-20022-testing-requirements-key-milestones
    https://www.frbservices.org/resources/financial-services/wires/faq/iso-20022
    https://www.jpmorgan.com/insights/payments/payments-optimization/iso-20022-migration

  • Beyond the FTC: Consumer Class Actions Are Redefining Influencer Marketing Risk

    By Lindsay M. Gehman and Saachi S. Gorinstein

    The influencer marketing ecosystem has evolved into a multibillion-dollar engine of digital commerce, delivering measurable ROI to brands across industries. However, as the industry matures, so too does the legal landscape underpinning it. While many marketers are familiar with the Federal Trade Commission’s (“FTC”) endorsement guidelines, what’s becoming increasingly apparent is that compliance with FTC regulations is no longer enough.

    A growing number of consumer class actions are testing the boundaries of influencer liability under state consumer protection laws. These suits draw on so-called “Little FTC Acts,” which closely mirror federal guidance and give private individuals the right to pursue claims. Although it remains to be seen how successful these lawsuits will be on the merits, the trend suggests that brands and influencers should be watching closely and preparing accordingly. If these suits continue to survive early motions and succeed on the merits, they could encourage more consumers to pursue similar claims, expanding the legal exposure associated with influencer campaigns.

    A New Form of Enforcement: The Revolve Class Action

    The Negreanu v. Revolve lawsuit marks a turning point. Filed in April 2025 in the Central District of California, the $50 million class action alleges that Revolve, an online clothing retailer, paid influencers to promote its clothing on platforms like Instagram and TikTok without adequately disclosing the sponsorships. The plaintiffs claim the posts were presented as personal style recommendations, not advertisements, and lacked clear indicators such as “#ad” or “paid partnership.” The suit cites violations of the FTC endorsement guidelines, Florida’s Deceptive and Unfair Trade Practices Act, California’s Consumers Legal Remedies Act, and consumer protection statutes in over 20 states.

    This shift from regulatory oversight to private enforcement is a noteworthy development. It suggests that compliance with FTC guidelines may no longer be sufficient to insulate brands from risk if influencer content is perceived as misleading.

    Influencer Endorsements on Trial: Four Cases to Watch

    Pop v. Lulifama.com (2023) – The Importance of Particularity

    In this case, consumer Alin Pop sued swimwear brand Luli Fama and several influencers for promoting products without disclosing their paid relationships. The court dismissed the case with prejudice, holding that the complaint lacked the specificity required under Rule 9(b). The court found that Mr. Pop failed to identify which specific posts influenced his purchase or to provide evidence that the undisclosed sponsorships led to economic harm. The court also clarified that FTC guidelines (16 C.F.R. § 255.5) are not binding regulations and therefore cannot, on their own, establish a per se violation of Florida’s consumer protection law (FDUTPA).

    Key takeaway: Simply alleging non-disclosure is insufficient. Plaintiffs must link specific misrepresentations to consumer action and economic injury.

    Sava v. 21st Century Spirits (2024) – A Stronger Complaint Survives

    In contrast, the same plaintiff, Alin Pop, joined Mario Sava in a suit against Blue Ice Vodka maker 21st Century Spirits and its influencer partners. The plaintiffs alleged that the product was deceptively marketed as “handcrafted,” “low-calorie,” and “fit-friendly,” and that influencers failed to disclose their paid relationships. This time, the court allowed most of the claims to proceed. The plaintiffs provided detailed factual allegations, identifying marketing claims, influencer posts, and specific purchase decisions.

    The court found the plaintiffs had Article III standing (the constitutional threshold for bringing suit in federal court, which requires a plausibly alleged “concrete” and “particularized” injury) based on their claim that they suffered an economic injury – specifically, that they overpaid for a misrepresented product. The court also noted that while FTC guidelines do not carry the force of law, they may inform whether conduct is deceptive under state law.

    Bengoechea v. Shein (2025) – Class Action Momentum Grows

    Filed by consumers Amanda Bengoechea and Makayla Gipe, this suit targets fashion retailer Shein and several influencers for promoting products without clear disclosures. The plaintiffs claim the influencers’ paid relationships were obscured in dense hashtags or hidden behind “see more” links, misleading consumers into thinking the endorsements were genuine. The complaint alleges that the received products were of lower quality than expected and seeks over $500 million in damages.

    Dubreu v. Celsius Holdings (2025) – Targeting Health Claims

    In a similar action, Lauren Dubreu sued energy drink company Celsius and three influencers who promoted the product as a fitness-friendly beverage without disclosing compensation. Some posts claimed that Celsius cocktails had “fewer calories than an apple,” a representation the plaintiff alleges was materially misleading. The suit alleges violations of California’s False Advertising Law, Unfair Competition Law, and Consumers Legal Remedies Act, and seeks at least $450 million in damages.

    These cases remain in early stages, but they demonstrate how courts and consumers are beginning to engage more actively with the question of whether influencer marketing is appropriately transparent.

    Understanding the Legal Risk: Why This Matters Now

    These lawsuits reflect a broader redefinition of influencer marketing risk. Courts are increasingly recognizing that influencer endorsements can have a powerful effect on consumer decision-making, particularly when they appear personal or authentic. When the paid nature of that endorsement is hidden or unclear, courts have shown a willingness to find that consumers may have been misled.

    Two elements come under repeated scrutiny:

    • Whether claims made in the content are objectively misleading or unverifiable.
    • Whether there was a clear, conspicuous disclosure of the material connection between the brand and the influencer.

    As a result, compliance with the FTC’s Endorsement Guides remains a prudent baseline, but it may no longer be the final word. Plaintiffs’ attorneys are testing these boundaries, and courts appear increasingly open to allowing such claims to proceed past initial motions.

    Risk Management: What Brands and Influencers Can Do Now

    While the current wave of litigation is still developing, brands and agencies should view it as a signal to reassess and reinforce their influencer compliance frameworks. Consider taking the following steps:

    • Clarify and Standardize Disclosures. Use prominent, platform-appropriate tags like “#ad” or “sponsored” placed early in the caption. Avoid burying disclosures in dense hashtag blocks or requiring users to click “see more.”
    • Contract Thoughtfully. Influencer agreements should include disclosure obligations aligned with FTC guidelines and applicable state law. Brands and agencies should retain the right to approve posts, especially when specific product claims are made.
    • Monitor and Audit Content. Implement systems for periodically reviewing influencer posts to verify compliance. Screenshots and logs can serve as helpful evidence if a dispute arises.
    • Substantiate All Product Claims. Statements like “handcrafted,” “low calorie,” or “fewer calories than an apple” must be backed by verifiable data, or avoided entirely. Courts are increasingly looking for objective substantiation, especially for health or pricing claims.
    • Train Internal Teams and Partners. Marketers and legal teams should stay informed about evolving disclosure standards and train influencers accordingly. Missteps are most likely when expectations are unclear or assumed.

    Looking Ahead: A Trend Worth Watching

    While the long-term viability of consumer-led class actions in this space is still unfolding, the early signs point to increased judicial interest in the sufficiency of influencer disclosures. Courts are not yet unanimous in how these cases should be treated, but they are taking them seriously.

    In the meantime, the safest course for brands and agencies is to assume that influencer endorsements are commercial speech and to govern them accordingly. Building strong, documented compliance procedures is no longer just a best practice – it is a necessary safeguard.
