• Navigating the Shifting AI Landscape: What U.S. Businesses Need to Know in 2025

    By Mari Clifford and Scott Hall

    Artificial intelligence is no longer a wild west frontier technology—it’s a regulated one. As AI systems become central to how companies operate, communicate, and compete, legal oversight is catching up. In 2025, AI governance is defined by divergence: a harmonized, risk-based regime in the EU; a fragmented, reactive framework in the U.S.; and rapid regulatory expansion at the state and global levels. Businesses deploying or developing AI must now navigate a multi-jurisdictional patchwork of laws that carry real compliance, litigation, and reputational consequences.

    This article outlines the key regulatory developments, contrasts the EU and U.S. approaches, and offers concrete recommendations for U.S. companies operating AI systems.

    EU AI Act: Global Reach with Teeth

    The EU AI Act, which entered into force in August 2024, is the world’s first comprehensive, binding legal framework for AI. It classifies systems by risk level—unacceptable, high, limited, and minimal—and imposes extensive obligations on high-risk and general-purpose AI (GPAI) models. High-risk AI systems must undergo pre-market conformity assessments, maintain technical documentation, and register in a public EU database. GPAI models face additional transparency, copyright, and cybersecurity obligations, particularly if they exceed scale thresholds (e.g., >10,000 EU business users).

    The Act’s extraterritorial reach means U.S. companies offering AI products or services in the EU—or whose outputs affect EU residents—must comply. Notably, failure to implement the EU’s “voluntary” GPAI Code of Practice could shift the burden of proof in enforcement actions.

    Timeline to Watch: The law becomes enforceable starting August 2026, with GPAI obligations phasing in from 2025.

    The U.S. Approach: Fragmentation, Tension, and State-Level Acceleration

    Executive Orders & Federal Initiatives

    U.S. federal law remains sectoral and piecemeal. President Biden’s 2023 Executive Order on “Safe, Secure, and Trustworthy AI” established guiding principles, including fairness, transparency, and privacy protections, and tasked agencies with issuing AI-specific standards. However, this was rescinded in 2025 by the Trump administration’s new EO prioritizing deregulation and “American leadership in AI,” creating a sharp policy pivot and regulatory uncertainty. In parallel, the administration also unveiled a draft AI Action Plan, emphasizing voluntary industry standards and innovation incentives over binding rules. While still in flux, this initiative further underscores the unsettled political climate around federal AI policy.

    While bills like the AI Accountability Act and the SAFE Innovation Framework have been proposed, no comprehensive federal AI law has passed. Instead, federal agencies like the FTC, EEOC, and CFPB continue to regulate AI through existing consumer protection and civil rights laws—often through enforcement actions rather than formal rulemaking.

    State Spotlight: Colorado, California, and Others Lead the Way

    Absent a comprehensive federal law, states have moved decisively. The list below highlights a representative sample of enacted state AI statutes as of July 2025; dozens of additional bills are pending and advancing every legislative cycle:

    Arizona

    • HB 2175 – requires health-insurer medical directors to personally review any claim denial or prior-authorization decision that relied on AI, exercising independent medical judgment (effective June 30, 2026).

    California

    • AB 1008 – expands the CCPA definition of “personal information” to cover data handled or output by AI.
    • AB 1836 – bars commercial use of digital replicas of deceased performers without estate consent.
    • AB 2013 – requires AI developers to post detailed training-data documentation.
    • AB 2885 – creates a uniform statutory definition of “artificial intelligence” (effective January 1, 2025).
    • AB 3030 – mandates clear gen-AI disclaimers in patient communications from health-care entities (effective January 1, 2025).
    • SB 1001 “BOT” Act – online bots used to sell goods or services or to influence a vote in an election must self-identify as bots.
    • SB 942 AI Transparency Act – platforms with >1M monthly users must label AI-generated content and provide a public detection tool.

    Colorado

    • SB 24-205 Colorado AI Act – first comprehensive U.S. framework for “high-risk” AI; imposes reasonable-care, impact-assessment, and notice duties on developers and deployers (effective 2026).
    • SB 21-169 – bans unfair discrimination by insurers through algorithms or predictive models.
    • HB 23-1147 – requires deep-fake disclaimers in election communications.
    • Colorado Privacy Act – consumers may opt out of AI “profiling” that produces legal or similarly significant effects; DPIAs required for such processing.

    New York

    • New York City Local Law 144 – employers using automated employment-decision tools must obtain an annual independent bias audit and post a summary.

    Tennessee

    • HB 1181 – Tennessee Information Protection Act (2024) – statewide privacy law; impact assessments required for AI profiling posing significant risks.
    • “ELVIS Act” (2024) – makes voice mimicry by AI without permission a Class A misdemeanor and grants a civil cause of action.

    Texas

    • Texas Data Privacy and Security Act – lets Texans opt out of AI profiling that has significant effects and compels risk assessments for such uses.

    Utah

    • SB 149 “AI Policy Act” (amended by SB 226) – requires disclosure when consumers interact with generative-AI chat or voice systems and sets professional-licensing guardrails.
    • HB 452 – “Artificial Intelligence Applications Relating to Mental Health” – regulates the use of mental health chatbots that employ artificial intelligence (AI) technology.

    Expect additional Colorado-style comprehensive AI frameworks to surface in 2025-26 as states continue to fill the federal gap.

    Global Developments & Cross-Border Tensions

    Beyond the EU and U.S., countries like Brazil, China, Canada, and the U.K. are advancing AI governance through a mix of regulation and voluntary standards. Notably:

    • China mandates registration and labeling of AI-generated content.
    • Brazil is poised to pass a GDPR- and EU AI Act-style law.
    • The U.K. continues to favor a principles-based, regulator-led approach but may pivot toward binding regulation.

    U.S.-EU divergence has triggered geopolitical friction. The EU’s upcoming GPAI Code of Practice is a flashpoint, with U.S. officials warning it could disproportionately burden American firms. Meanwhile, the U.S. may reconsider participation in multilateral frameworks like the Council of Europe’s AI Treaty.

    A Compliance Playbook for 2025

    AI legal exposure increasingly mirrors privacy law: patchwork rules, aggressive enforcement, and high reputational stakes. To mitigate risk, companies should:

    • Inventory AI Systems: Identify all AI tools in use—especially those making or influencing decisions in high-risk sectors (HR, healthcare, finance, etc.). A minimal inventory sketch appears after this list.
    • Conduct Risk Assessments: For GPAI or high-risk tools, assess training data, bias exposure, and explainability. Use frameworks like NIST’s AI RMF or the EU’s conformity checklist.
    • Build Cross-Functional Governance: Legal, compliance, technical, and product teams must coordinate. Assign AI risk ownership and create change triggers for reclassification (e.g., changes in use or scale).
    • Monitor State and Federal Law Developments.
    • Plan for EU Market Entry: Determine whether EU-facing AI systems require local representation, registration, or conformity assessment under the AI Act.
    • Audit Communications: Avoid AI-washing. Public statements about capabilities, safety, or human oversight must match internal documentation and performance.
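
    As a starting point for the first two steps above, the following minimal Python sketch shows one hypothetical way to structure an AI-system inventory with a first-pass risk triage. Everything here (the AISystemRecord fields, the triage rules, the risk tiers loosely borrowed from the EU AI Act’s taxonomy) is an illustrative assumption rather than a legal standard; actual classification requires legal and technical review.

    ```python
    from dataclasses import dataclass, field
    from enum import Enum

    class RiskTier(Enum):
        """Illustrative tiers loosely following the EU AI Act's taxonomy."""
        UNACCEPTABLE = "unacceptable"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    @dataclass
    class AISystemRecord:
        """One row in a hypothetical AI-system inventory."""
        name: str
        vendor: str
        business_use: str            # e.g., "resume screening"
        sector: str                  # e.g., "HR", "healthcare", "finance"
        influences_decisions: bool   # makes or shapes decisions about people?
        eu_facing: bool              # offered in the EU or affecting EU residents?
        risk_tier: RiskTier = RiskTier.MINIMAL
        notes: list[str] = field(default_factory=list)

    def triage(record: AISystemRecord) -> AISystemRecord:
        """Naive first-pass triage: decision-influencing tools in sensitive
        sectors get flagged for a fuller assessment (e.g., under NIST's AI RMF)."""
        if record.influences_decisions and record.sector in {"HR", "healthcare", "finance"}:
            record.risk_tier = RiskTier.HIGH
            record.notes.append("schedule bias/explainability assessment")
        if record.eu_facing:
            record.notes.append("check EU AI Act conformity and registration duties")
        return record

    screener = triage(AISystemRecord(
        name="ResumeRanker (hypothetical)", vendor="ExampleVendor Inc.",
        business_use="resume screening", sector="HR",
        influences_decisions=True, eu_facing=False,
    ))
    print(screener.risk_tier.value, screener.notes)
    ```

    Even a toy structure like this forces the threshold questions (who is affected, which sector, which jurisdictions) to be answered for every tool before deeper assessments are scheduled.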

    The message from global regulators is clear: innovation is welcome, but governance is non-negotiable. Whether operating domestically or globally, businesses must prepare for AI compliance to become a core legal discipline, akin to privacy or cybersecurity.

    For legal teams and compliance leaders, now is the time to move from principles to programs—and to see governance as a competitive advantage, not just a regulatory burden.

    If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com or Mari Clifford at mclifford@coblentzlaw.com for further information or assistance.

  • 2025 Mid-Year Privacy Report

    A Comprehensive Look at New Developments in Data Privacy Laws

    By Scott Hall, Mari Clifford, Leeza Arbatman, Kat Gianelli, Saachi Gorinstein, and Hunter Moss

    Download a PDF version of this report here.

    In 2025, privacy and AI regulation have moved from the sidelines to the center of business risk and strategy. U.S. states are rapidly enacting a patchwork of privacy laws, with new AI laws emerging and expected to increase. Meanwhile, regulators are tightening oversight of automated decision making, children’s data, health metrics, and cross-border data transfers. And litigation over online data collection by companies continues to expand under various statutes, including wiretapping and pen register claims under the California Invasion of Privacy Act (CIPA), and claims under the Video Privacy Protection Act (VPPA), resulting in diverging court rulings that send mixed signals to companies regarding privacy compliance.

    Our Mid-Year Privacy Report examines the most significant developments shaping the privacy and AI landscape in 2025 and highlights practical steps businesses can take to navigate an increasingly complex, multi-jurisdictional legal landscape.

    You can download the full report here. If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com for further information or assistance.

  • New California Regulations Regarding AI Use in Hiring and Employment

    By Fred Alvarez and Hannah Withers 

    If your company is using AI or other automated decision making systems to make employment or hiring decisions, a new set of California regulations taking effect on October 1, 2025 will require your attention.

    Issued by the California Civil Rights Council (CRC), these regulations define what types of AI systems are being regulated, what types of employment decisions are included, and what employers need to do to prevent discrimination claims resulting from the use of these AI tools. If you (or your agents) are using AI in any manner related to employment decisions, we suggest familiarizing yourself with the key aspects of the regulations, which are summarized below.

    The purpose of the regulations is to respond to concerns about the potentially discriminatory impact of AI tools in employment and to make clear that California’s anti-discrimination laws still apply even when employers use AI tools to make decisions. An employer cannot escape liability for a discriminatory hiring or employment decision on the ground that the decision was made by an AI tool; companies remain liable even when decisions are automated.

    Key Sections of the Regulations to be Familiar With:

    • An “Automated-decision system” is defined as “a computational process that makes a decision or facilitates human decision making regarding an employment benefit… [it] may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.”
    • The covered employment and hiring practices are quite broad and include decisions related to recruitment, applicant screening, background checks, hiring, promotion, transfer, pay, benefit eligibility, leave eligibility, employee placement, medical and psychological examination, training program selection, or any condition or privilege of employment.
    • An “agent” includes “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity, which may include applicant recruitment, applicant screening, hiring, promotion, or decisions regarding pay, benefits, or leave, including when such activities and decisions are conducted in whole or in part through the use of an automated decision system. An agent of an employer is also an ‘employer’ for purposes of the Act.”
    • Under the rules, it is unlawful to use AI systems that result in employment discrimination based on protected characteristics (e.g., religion, race, gender, disability, national origin, age, etc.). The rules also specify that the use of technology that has an adverse impact is “unlawful unless job-related and consistent with business necessity and the technology includes a mechanism for requesting an accommodation.”
    • There is a four-year record retention requirement for employment records created or received (including applications, personnel records, membership records, employment referral records, selection criteria, and automated-decision system data) dealing with a covered employment practice or employment benefit.

    Here is What We Recommend Considering and Preparing For:

    • Determine if you are using any AI systems that should be evaluated for possible discriminatory impact. Consider whether you use any of the following systems with employees or applicants:
      • Computer-based assessments or tests, such as questions, puzzles, games, or other challenges.
      • Automated systems to direct job advertisements or other recruiting materials to targeted groups.
      • Automated systems to screen resumes for particular terms or patterns.
      • Automated systems to analyze facial expression, word choice, and/or voice in online interviews.
      • Automated systems to analyze employee or applicant data acquired from third parties.
    • If you use an automated decision making system, examine what data is being collected, how decisions are being made, and any discriminatory impact that might result (a simple illustrative screen appears after this list). Consider how applicants or employees in each protected group might be impacted by the way the automated systems are set up and implemented. (e.g., The use of online application technology that limits, screens out, ranks, or prioritizes applicants based on their schedule may discriminate against applicants based on their religious creed, disability, or medical condition. Such a practice having an adverse impact is unlawful unless job-related and consistent with business necessity and the online application technology includes a mechanism for the applicant to request an accommodation.)
    • Investigate how the automated decision making systems you are using check for bias with respect to protected classifications. Efforts to check for bias can be used as a defense to claims of discrimination. To check for bias, you should investigate whether the AI tool:
      • Monitors for unintended discriminatory effects during the ongoing use of the software.
      • Conducts live bias tracking functionality as decisions are being made.
      • Supports compliance with anti-discrimination requirements in states that require annual independent bias audits of AI tools in hiring.
      • Supports customer compliance with applicant notice requirements in states that have laws relating to the use of AI tools in hiring.
      • Maintains records of hiring decisions and AI processes conducted.
      • Has materials or a white paper regarding employment and privacy law compliance.
      • Has measures in place to prevent discriminatory outcomes in its algorithm design and outputs.
    • If you work with third parties or other agents for hiring or employment decisions, talk to them about what AI tools they are using and learn more about what information those tools gather and how they make decisions. You are responsible for how your agent is using AI tools and you should communicate with agents specifically about their compliance with these California regulations.
    • Review your accommodations policies to make sure that any automated systems being used are not operating in a way that would miss the need for an accommodation.
    • Review your record retention practices for employment and hiring decisions to make sure that you keep records of hiring or employment decisions made using AI systems for four years.
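
    On the question of discriminatory impact raised in the list above, a common first-pass screen in U.S. employment practice is the EEOC’s “four-fifths” rule of thumb, which compares each group’s selection rate against the most-selected group’s rate. The Python sketch below is a minimal, hypothetical illustration using invented numbers; a ratio below 0.8 is a conventional red flag, not proof of discrimination, and any real analysis should involve proper statistical testing and counsel.

    ```python
    def selection_rate(selected: int, applicants: int) -> float:
        """Fraction of applicants in a group who passed the automated screen."""
        return selected / applicants if applicants else 0.0

    def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Compare each group's selection rate to the highest group's rate.
        Ratios below 0.8 (the EEOC "four-fifths" rule of thumb) are a common
        red flag for adverse impact and call for closer statistical analysis."""
        rates = {g: selection_rate(sel, total) for g, (sel, total) in groups.items()}
        top = max(rates.values())
        return {g: (r / top if top else 0.0) for g, r in rates.items()}

    # Invented outcomes from a hypothetical resume screen: (selected, applicants)
    impact = four_fifths_check({"Group A": (48, 100), "Group B": (30, 100)})
    for group, ratio in impact.items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
    ```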


    The Coblentz Employment team is available to answer any questions you may have about the impact of these regulations and how to prepare logistically ahead of their effective date on October 1, 2025. For additional information, you may also refer to the CRC’s press release.

  • Monkey Business No More: Ninth Circuit Rules NFTs Are Protected by Trademark Law, Confirms the Limits of Expressive Speech Protection, but Overturns Judgment of Likely Confusion

    By Sabrina Larson and Kat Gianelli

    Key Takeaways

    • The Ninth Circuit confirmed that non-fungible tokens (NFTs) are ‘goods’ under the Lanham Act and can be protected by trademark law.
    • Even if a defendant uses a trademark owner’s mark with the goal of commentary and criticism, the fair use doctrine will not protect that use where the defendant uses the mark to designate its own goods.
    • The First Amendment does not protect a defendant’s unlicensed use of a trademark when the use of the mark is at least partially acting as a source identifier, even if the defendant intended such use as satire and expressive speech.
    • The decision is a win for brand owners promoting digital assets.

    The Ninth Circuit ruled in Yuga Labs, Inc. v. Ryder Ripps on July 23, 2025 that non-fungible tokens (NFTs) are eligible for trademark protection under the Lanham Act, a significant development for creators of digital tokens. The Court also confirmed the limits of protection for satirical, expressive speech where the defendant nonetheless uses the plaintiff’s trademarks as source identifiers.

    The Court, however, overturned the lower court’s $8.8 million judgment for Yuga, finding that Yuga had not proven at summary judgment that the defendants’ tokens are likely to confuse NFT buyers.

    Background

    Yuga Labs, which created the “Bored Ape Yacht Club” NFT collection, sued artists Ryder Ripps and Jeremy Cahen for creating a nearly identical NFT collection titled “Ryder Ripps Bored Ape Yacht Club,” tied to the same ape images as Yuga’s NFTs. Yuga alleged trademark infringement and unlawful cybersquatting.

    Examples of Yuga’s Bored Ape NFTs[1]

    The defendants claimed their project was a satirical protest, and countersued alleging violation of the Digital Millennium Copyright Act (DMCA) and sought declaratory relief that Yuga had no copyright protections over Bored Apes.

    The district court granted summary judgment for Yuga on its trademark infringement claim and anti-cybersquatting claim, and also granted summary judgment for Yuga with regards to the defendants’ DMCA counterclaim, resulting in an $8.8 million judgment for Yuga, which the artists appealed.

    Ninth Circuit Analysis and Decision

    One of the defendants’ defenses was to argue that NFTs are not ‘goods’ under the Lanham Act, but the Ninth Circuit disagreed, holding that NFTs are protectable as ‘goods’ under the Lanham Act and affirming that Yuga’s “Bored Ape Yacht Club” trademarks are enforceable despite their digital nature. This conclusion aligns with the U.S. Patent & Trademark Office, which has also concluded that NFTs are ‘goods.’ The Court reasoned that NFTs are “more than a digital deed to or authentication of artwork” because they “also function as membership passes, providing ‘Ape holders’ with exclusive access to online and offline social clubs, branded merchandise, interactive digital spaces, and celebrity events.” The Court concluded, “Yuga’s NFTs are not merely monkey business and can be trademarked.”

    The defendants also argued that they made nominative fair use of the Yuga marks. A common example of fair use is where one “‘deliberately uses another’s trademark or trade dress for the purposes of comparison, criticism, or point of reference.’”[2] The Court disagreed because the defendants used the Yuga marks not merely to reference Yuga’s NFTs, but as trademarks – that is, to create, promote, and sell their own NFTs. In that case, “[i]t does not matter that Defendants’ ultimate goal may have been criticism and commentary.”[3]

    The Court also rejected the defendants’ argument under the First Amendment that their NFTs were part of an expressive art project and that the “expressive nature” of their use of the Yuga marks entitled them to an exception to trademark infringement for expressive speech. Again, the Court disagreed because this exception does not apply where the defendant uses the marks as source identifiers. “[W]hen a use of the plaintiff’s mark is ‘at least in part for source identification,’ the First Amendment exception to trademark enforcement is foreclosed.”[4]

    Ultimately, the Court reversed the district court’s grant of summary judgment on trademark infringement and cybersquatting claims against the defendants, finding that the likelihood of consumer confusion, which is central to both claims, presents factual disputes that must be resolved at trial. Although the defendants’ satirical use did not establish nominative fair use or protect the use of the marks under the First Amendment, the Court noted that that purpose created “significant questions about whether the likelihood-of-consumer-confusion requirement was satisfied.”

    The panel affirmed the dismissal of the defendants’ counterclaims under the DMCA and for declaratory relief, concluding there was no evidence of knowing misrepresentation or an active copyright dispute.

    Conclusion and Takeaways

    The Ninth Circuit emphasized that “when we apply ‘established legal rules to the totally new problems’ of emerging technologies, our task is ‘not to embarrass the future.’”[5] This decision marks a significant step in adapting traditional intellectual property law to the evolving digital economy. It is a win for brand owners operating in the digital economy, opening the door for them to bring claims against infringing digital goods as they traditionally have against counterfeit products.

    While the Court remanded for a determination of whether the defendants infringed Yuga’s marks, it clarified that NFTs are not exempt from the protections and tenets of trademark law in the Ninth Circuit – NFTs are ‘goods’ under trademark law, and trademark infringement analysis must be applied when those marks are used at least in part as source identifiers by the defendant even with the intention of criticism and satire.


    [1] Yuga Labs Inc v. Ryder Ripps, 9th U.S. Circuit Court of Appeals, No. 24-879, Opinion (“Op.”) at 10.

    [2] Op. at 34, quoting E.S.S. Ent. 2000, Inc. v. Rock Star Videos, Inc., 547 F.3d 1095, 1098 (9th Cir. 2008).

    [3] Op. at 36. See Jack Daniel’s Props., Inc. v. VIP Prods. LLC, 599 U.S. 140, 148 (2023) (explaining a defendant does not get the benefit of fair use “even if engaging in parody, criticism, or commentary – when using the similar-looking mark ‘as a designation of source for the [defendant’s] own goods’” (alteration in original) (citation omitted)). See our analysis of the Jack Daniel’s decision here.

    [4] Op. at 41, quoting Jack Daniel’s, 599 U.S at 156. See our analysis of the Jack Daniel’s decision here.

    [5] Op. at 6, quoting TikTok Inc. v. Garland, 604 U.S. –, 145 S. Ct. 57, 62 (2025) (cleaned up and internal quotations omitted).


  • 2025 CEQA Reforms: What Developers Need to Know

    By Miles Imwalle, Megan Jennings, Elena Neigher, Alyssa Netto, and Craig Spencer

    Governor Gavin Newsom signed two budget trailer bills on June 30, 2025, enacting the most substantial reforms to the California Environmental Quality Act (CEQA) in over five decades. To help you navigate these important changes, we have prepared a three-part summary of budget trailer bills Assembly Bill 130 and Senate Bill 131:

    New CEQA Exemption for Infill Housing Development Projects: What it Means for Developers 

    AB 130 and SB 131 were adopted on the last day of the 2024-25 fiscal year after the Governor made it clear he would not approve the budget without meaningful CEQA reforms. While not the sweeping “rollback” of environmental review that some sources have claimed, the legislation will undoubtedly smooth the road to approval for many infill housing projects. In this post, we focus on the criteria for using the new exemption for housing development projects in AB 130. Read more here.

    “Near-Miss” CEQA Streamlining: New Option to Reduce Scope of Review for Housing Development Projects 

    SB 131 includes a new CEQA process that limits the environmental review required for “near-miss” housing development projects—those projects that meet all criteria for a CEQA exemption, except for a single disqualifying condition. Specifically, the environmental review in these instances is restricted to analyzing impacts stemming exclusively from the single condition that disqualifies the housing project from receiving a statutory or categorical exemption. Read more here.

    CEQA Transportation Mitigation Fees and Other Key Reforms in AB 130 and SB 131 

    In our third update on the important changes in budget trailer bills AB 130 and SB 131, we cover changes to the mitigation options for vehicle miles traveled (VMT), additional focused CEQA exemptions, and other amendments to land use processes. Read more here.

    The Coblentz Real Estate Team has extensive experience with the state’s latest land use laws and can help to navigate their complexities and opportunities. Please contact us for additional information and any questions related to the impact of this legislation on land use and real estate development.

  • California Releases Final Employee Notice on Victim Leave Rights

    By Fred W. Alvarez, Hannah Jones, Dan Bruggebrew, Allison Moser, Paige Pulley, Hannah Withers, and Stacey Zartler

    The California Civil Rights Department (CRD) just released its long-awaited model employee notice triggering a new compliance obligation for all California employers regarding the rights of employees who are victims of qualifying acts of violence. This is a good time to review your policies and onboarding materials to ensure you’re providing this notice to employees now and going forward.

    What’s New?

    Effective immediately, employers must provide notice to employees about their rights to take protected leave and request workplace accommodations if they or their family members are victims of certain crimes. This requirement is tied to Assembly Bill 2499 (codified as Government Code §12945.8), which expanded existing protections and made notice mandatory now that the CRD model notice is available. The model notice is located here: CRD Model Notice

    Who Needs to Comply?

    All California employers, regardless of size, are required to provide this notice.

    If you have 25 or more employees, additional protections apply to employees whose family members are victims of a qualifying act of violence. “Family member” is broadly defined to cover a child, parent, grandparent, grandchild, sibling, spouse, domestic partner, or “designated person,” who can be someone related by blood, such as an aunt or uncle, or someone equivalent to a family member, such as a best friend. Employers may limit an employee to one “designated person” per 12-month period.

    When and How to Provide the Notice

    The new law requires you to give this notice in four scenarios:

    • At hire – Include it in your onboarding packet effective immediately.
    • Annually – Distribute it to all employees once per year.
    • Upon request – Provide it to any employee who asks.
    • When notified – If an employee tells you they or a family member are a victim of a qualifying crime.

    You can use the CRD’s model notice or create your own version, as long as it’s substantially similar in both content and clarity. If 10% or more of your workforce at a location speaks a language other than English, you’ll need to provide the notice in that language. The CRD has made translated versions available on its website.

    What the Notice Covers

    The notice explains an employee’s rights, including:

    • Job-protected leave for medical care, counseling, safety planning, or legal help related to the incident.
    • Workplace safety accommodations, like schedule changes, reassignment, or security assistance—subject to an interactive process and undue hardship standard.
    • Protection from retaliation for using these rights.
    • Confidentiality of any information shared regarding the incident or related requests.

    It also reminds employees they may be eligible for wage replacement under State Disability Insurance or Paid Family Leave, and may qualify for bereavement leave and other forms of crime victim leave under separate Labor Code provisions and applicable law.

    What You Should Do Now

    Here’s a practical checklist to help you meet your new obligations:

    • Download and review the CRD’s model notice.
    • Add the notice to your onboarding documents and distribute it to current employees annually.
    • Train HR and managers to respond appropriately when employees raise concerns or request time off or accommodations under this law.
    • Be prepared to provide the notice to current employees if they make a request.

    Want More Details? Read the CRD’s FAQ

    The CRD has also published an FAQ document that answers common employer questions about the law and the notice requirement. You can view it here: CRD FAQs

    Here are a few highlights:

    • What is a “qualifying act of violence”?
      It’s broader than domestic violence or sexual assault—it includes any crime that causes physical or mental injury, or the death of a family member.
    • Can we create our own notice instead of using the CRD version?
      Yes, but it must be substantially similar in both content and clarity.
    • Do we have to provide this notice to existing employees immediately?
      While there isn’t a specific requirement that notice be provided to existing employees immediately, employers need to provide it annually, and we recommend rolling this out as soon as practical.
    • What happens if we don’t comply?
      Non-compliance can lead to enforcement action by the CRD, including penalties for failing to provide the notice or interfering with protected leave rights.

    If you’d like support reviewing your materials, preparing communications, or training your team, we’re here to help. Let us know if you’d like the notice translated into your preferred language(s), or if you’d like assistance adapting it into your onboarding materials.

  • Federal Reserve to Implement New ISO 20022 Funds Wiring System

    By Kyle J. Recker and Max Martinez

    Due to the Federal Reserve’s imminent shift to a new funds wiring system (known as ISO 20022), if you have upcoming plans to transfer any amount of funds via wire transfer, confirm with your bank and anyone else handling your funds that they are prepared for the shift to ISO 20022 and can accommodate your wire on the planned date of transfer.


    On July 14, 2025, the Federal Reserve plans to implement a new funds wiring and messaging format, ISO 20022, to modernize both domestic and cross-border wire transfers. After three years of development and trials, the Federal Reserve will sunset its existing wiring system, the Fedwire Application Interface Manual (FAIM), which is currently used nationwide by banks, escrow services, and other funds exchange operations to facilitate wiring of funds from one party to another. “ISO” refers to the International Organization for Standardization, and the change to ISO 20022 will align the Federal Reserve wire transfer system with those used in other payments markets, including those of key U.S. trading partners. The ISO 20022 system allows for more detailed information to be included with a wire transfer, which is expected to improve efficiencies in related wire transfer processes and result in faster and more reliable payments. The upgrade should be welcome news to anyone regularly involved in closing transactions that involve the wiring of funds, as the existing FAIM system has been known to cause some consternation due to its lack of transparency and predictability (e.g., the anxiety-ridden waiting period from funds wiring to receipt by the escrow service for a same-day transaction closing).

    The ISO 20022 system underwent customer testing from March to June 2025 (in which ISO 20022 was actually used for certain planned wire transfers in commercial settings), while testing of the system’s online portal interface has been ongoing since March 2023. However, each FAIM user is responsible for developing its own preparedness and contingency plans in connection with the phase-out of FAIM and the implementation of ISO 20022, so there may be some variance among institutions in the smoothness and efficiency of the transition. If you have upcoming plans to transfer any amount of funds via wire transfer, particularly if a large sum, you should confirm with your bank and anyone else handling your funds as to whether they are prepared for the shift to the ISO 20022 system and can accommodate your wire on the planned date of transfer. You should also be prepared for the possibility of wires being delayed due to transitional complications. If possible, it may be prudent to wire funds in advance of any upcoming closings or be prepared to extend or delay a closing date for a few days.
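
    For those curious what the change looks like under the hood, ISO 20022 payment messages are structured XML rather than FAIM’s fixed-format fields. The Python sketch below assembles a heavily simplified, hypothetical skeleton in the style of a pacs.008 (FI-to-FI customer credit transfer) message; real Fedwire ISO 20022 messages follow the Federal Reserve’s published usage guidelines and include required namespaces, agent identifiers, and settlement details omitted here.

    ```python
    import xml.etree.ElementTree as ET

    # Heavily simplified pacs.008-style skeleton (illustrative only; a real
    # message requires namespaces, agents, and settlement information).
    doc = ET.Element("Document")
    msg = ET.SubElement(doc, "FIToFICstmrCdtTrf")

    hdr = ET.SubElement(msg, "GrpHdr")                 # message-level header
    ET.SubElement(hdr, "MsgId").text = "MSG-2025-0001"
    ET.SubElement(hdr, "CreDtTm").text = "2025-07-14T09:30:00"
    ET.SubElement(hdr, "NbOfTxs").text = "1"

    tx = ET.SubElement(msg, "CdtTrfTxInf")             # one credit transfer
    ET.SubElement(tx, "IntrBkSttlmAmt", Ccy="USD").text = "250000.00"
    ET.SubElement(ET.SubElement(tx, "Dbtr"), "Nm").text = "Example Buyer LLC"
    ET.SubElement(ET.SubElement(tx, "Cdtr"), "Nm").text = "Example Escrow Co."
    # Structured remittance data is a key practical benefit of ISO 20022:
    ET.SubElement(ET.SubElement(tx, "RmtInf"), "Ustrd").text = "Closing funds, File No. 12345"

    print(ET.tostring(doc, encoding="unicode"))
    ```

    The richer, machine-readable remittance detail is what is expected to improve transparency and reconciliation relative to the legacy format.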

    As always, we encourage you to reach out to us with any questions on this topic or as may be needed in connection with any specific projects.

    Sources:

    https://www.frbservices.org/resources/financial-services/wires/iso-20022-implementation-center
    https://www.frbservices.org/news/communications/061825-fedwire-iso-go
    https://www.frbservices.org/resources/financial-services/wires/iso-20022-implementation-center/fedwire-iso-20022-testing-requirements-key-milestones
    https://www.frbservices.org/resources/financial-services/wires/faq/iso-20022
    https://www.jpmorgan.com/insights/payments/payments-optimization/iso-20022-migration

  • Beyond the FTC: Consumer Class Actions Are Redefining Influencer Marketing Risk

    By Lindsay M. Gehman and Saachi S. Gorinstein

    The influencer marketing ecosystem has evolved into a multibillion-dollar engine of digital commerce, delivering measurable ROI to brands across industries. However, as the industry matures, so too does the legal landscape underpinning it. While many marketers are familiar with the Federal Trade Commission’s (“FTC”) endorsement guidelines, what’s becoming increasingly apparent is that compliance with FTC regulations is no longer enough.

    A growing number of consumer class actions are testing the boundaries of influencer liability under state consumer protection laws. These suits draw on so-called “Little FTC Acts,” which closely mirror federal guidance and give private individuals the right to pursue claims. Although it remains to be seen how successful these lawsuits will be on the merits, the trend suggests that brands and influencers should be watching closely and preparing accordingly. If these suits continue to survive early motions and succeed on the merits, they could encourage more consumers to pursue similar claims, expanding the legal exposure associated with influencer campaigns.

    A New Form of Enforcement: The Revolve Class Action

    The Negreanu v. Revolve lawsuit marks a turning point. Filed in April 2025 in the Central District of California, the $50 million class action alleges that Revolve, an online clothing retailer, paid influencers to promote its clothing on platforms like Instagram and TikTok without adequately disclosing the sponsorships. The plaintiffs claim the posts were presented as personal style recommendations, not advertisements, and lacked clear indicators such as “#ad” or “paid partnership.” The suit cites violations of the FTC endorsement guidelines, Florida’s Deceptive and Unfair Trade Practices Act, the Consumers Legal Remedies Act, and consumer protection statutes in over 20 states.

    This shift from regulatory oversight to private enforcement is a noteworthy development. It suggests that compliance with FTC guidelines may no longer be sufficient to insulate brands from risk if influencer content is perceived as misleading.

    Influencer Endorsements on Trial: Four Cases to Watch

    Pop v. Lulifama.com (2023) – The Importance of Particularity

    In this case, consumer Alin Pop sued swimwear brand Luli Fama and several influencers for promoting products without disclosing their paid relationships. The court dismissed the case with prejudice, holding that the complaint lacked the specificity required under Rule 9(b). The court found that Mr. Pop failed to identify which specific posts influenced his purchase or to provide evidence that the undisclosed sponsorships led to economic harm. The court also clarified that FTC guidelines (16 C.F.R. § 255.5) are not binding regulations and therefore cannot, on their own, establish a per se violation of Florida’s consumer protection law (FDUTPA).

    Key takeaway: Simply alleging non-disclosure is insufficient. Plaintiffs must link specific misrepresentations to consumer action and economic injury.

    Sava v. 21st Century Spirits (2024) – A Stronger Complaint Survives

    In contrast, the same plaintiff, Alin Pop, joined Mario Sava in a suit against 21st Century Spirits, the maker of Blue Ice Vodka, and its influencer partners. The plaintiffs alleged that the product was deceptively marketed as “handcrafted,” “low-calorie,” and “fit-friendly,” and that influencers failed to disclose their paid relationships. This time, the court allowed most of the claims to proceed. The plaintiffs provided detailed factual allegations, identifying marketing claims, influencer posts, and specific purchase decisions.

    The court found the plaintiffs had Article III standing (a constitutional threshold for bringing suit in federal court, requiring a plausibly alleged “concrete” and “particularized” injury) based on their claim of economic injury: specifically, that they overpaid for a misrepresented product. The court also noted that while FTC guidelines do not carry the force of law, they may inform whether conduct is deceptive under state law.

    Bengoechea v. Shein (2025) – Class Action Momentum Grows

    Filed by consumers Amanda Bengoechea and Makayla Gipe, this suit targets fashion retailer Shein and several influencers for promoting products without clear disclosures. The plaintiffs claim the influencers’ paid relationships were obscured in dense hashtags or hidden behind “see more” links, misleading consumers into thinking the endorsements were genuine. The complaint alleges that the received products were of lower quality than expected and seeks over $500 million in damages.

    Dubreu v. Celsius Holdings (2025) – Targeting Health Claims

    In a similar action, Lauren Dubreu sued energy drink company Celsius and three influencers who promoted the product as a fitness-friendly beverage without disclosing compensation. Some posts claimed that Celsius cocktails had “fewer calories than an apple,” a representation the plaintiff alleges was materially misleading. The suit alleges violations of California’s False Advertising Law, Unfair Competition Law, and the Consumers Legal Remedies Act and seeks at least $450 million in damages.

    These cases remain in early stages, but they demonstrate how courts and consumers are beginning to engage more actively with the question of whether influencer marketing is appropriately transparent.

    Understanding the Legal Risk: Why This Matters Now

    These lawsuits reflect a broader redefinition of influencer marketing risk. Courts are increasingly recognizing that influencer endorsements can have a powerful effect on consumer decision-making, particularly when they appear personal or authentic. When the paid nature of that endorsement is hidden or unclear, courts have shown a willingness to find that consumers may have been misled.

    Two elements come under scrutiny repeatedly:

    • Whether claims made in the content are objectively misleading or unverifiable.
    • Whether there was a clear, conspicuous disclosure of the material connection between the brand and the influencer.

    As a result, compliance with the FTC’s Endorsement Guides remains a prudent baseline, but it may no longer be the final word. Plaintiffs’ attorneys are testing these boundaries, and courts appear increasingly open to allowing such claims to proceed past initial motions.

    Risk Management: What Brands and Influencers Can Do Now

    While the current wave of litigation is still developing, brands and agencies should view it as a signal to reassess and reinforce their influencer compliance frameworks. Consider taking the following steps:

    • Clarify and Standardize Disclosures. Use prominent, platform-appropriate tags like “#ad” or “sponsored” placed early in the caption. Avoid burying disclosures in dense hashtag blocks or requiring users to click “see more.”
    • Contract Thoughtfully. Influencer agreements should include disclosure obligations aligned with FTC guidelines and applicable state law. Brands and agencies should retain the right to approve posts, especially when specific product claims are made.
    • Monitor and Audit Content. Implement systems for periodically reviewing influencer posts to verify compliance (a toy example of such a check follows this list). Screenshots and logs can serve as helpful evidence if a dispute arises.
    • Substantiate All Product Claims. Statements like “handcrafted,” “low calorie,” or “healthier than an apple” must be backed by verifiable data, or avoided entirely. Courts are increasingly looking for objective substantiation, especially in health or pricing claims.
    • Train Internal Teams and Partners. Marketers and legal teams should stay informed about evolving disclosure standards and train influencers accordingly. Missteps are most likely when expectations are unclear or assumed.
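
    As a concrete illustration of the monitoring step above, the toy Python check below scans a caption for disclosure tags and flags any that are missing or buried late in the text. The tag list and the 125-character “early placement” cutoff are our own assumptions for illustration; they are not a legal test, and a visual review of how each platform truncates captions remains necessary.

    ```python
    DISCLOSURE_TAGS = ("#ad", "#sponsored", "paid partnership")  # assumed tag list
    EARLY_CUTOFF = 125  # chars; rough stand-in for "visible before 'see more'"

    def audit_caption(caption: str) -> str:
        """Flag captions with no disclosure, or with a disclosure buried so
        late that users would have to expand the post to see it."""
        lower = caption.lower()
        positions = [lower.find(t) for t in DISCLOSURE_TAGS if t in lower]
        if not positions:
            return "FAIL: no disclosure found"
        if min(positions) > EARLY_CUTOFF:
            return "WARN: disclosure present but buried late in the caption"
        return "OK: early placement (still verify it is clear and conspicuous)"

    print(audit_caption("Obsessed with this set! #ad Linked in bio."))
    print(audit_caption("Summer haul! " + "#style " * 30 + "#ad"))
    print(audit_caption("My honest faves this month, linked below."))
    ```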

    Looking Ahead: A Trend Worth Watching

    While the long-term viability of consumer-led class actions in this space is still unfolding, the early signs point to increased judicial interest in the sufficiency of influencer disclosures. Courts are not yet unanimous in how these cases should be treated, but they are taking them seriously.

    In the meantime, the safest course for brands and agencies is to assume that influencer endorsements are commercial speech and to treat them accordingly. Building strong, documented compliance procedures is no longer just a best practice – it is a necessary safeguard.

    To view as a PDF, click here.

  • California Assembly Weighs Nation’s Broadest AI-Driven Workplace Surveillance Bill: AB 1221 Raises the Bar, and the Stakes, for Employers

    By Mari Clifford and Scott Hall 

    In a move that could reshape day-to-day people-management practices across the state, the California Legislature is advancing Assembly Bill 1221 (“AB 1221”), a sweeping proposal that would regulate how employers deploy artificial intelligence-enabled monitoring tools and how they handle the torrents of data those tools generate. After clearing two policy committees, the measure was placed on the Assembly Appropriations Committee’s “suspense” file on May 14, 2025, a key fiscal hurdle on the way to a possible floor vote. AB 1221’s fiscal impact will be scrutinized in the Appropriations Committee, and the bill could still be amended, perhaps to narrow its scope or clarify open questions such as what constitutes a “significant update” to an existing tool. Nonetheless, the measure enjoys strong labor support and dovetails with California’s broader push to regulate AI. Even if AB 1221 stalls, its core concepts are likely to resurface.

    What AB 1221 Would Require

    The bill defines a “workplace surveillance tool” broadly to include virtually any technology that actively or passively captures worker data, from innocuous time-tracking widgets to sophisticated photo-optical systems. It would obligate employers (public and private, large and small, as well as their labor-contractor intermediaries) to furnish plain-language written notice at least thirty days before launching any such tool. That notice must spell out the categories of data collected, the business purpose, the frequency and duration of monitoring, retention periods, vendor identities, the extent to which the data informs employment decisions, and the process by which workers may access or correct that data.
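
    To see how much information the notice provision demands, it can help to model the required contents as a data structure. The Python sketch below is hypothetical; the field names are our own shorthand for the statutory categories described above, not statutory text.

    ```python
    from dataclasses import dataclass

    @dataclass
    class SurveillanceToolNotice:
        """Hypothetical model of AB 1221's 30-day advance notice contents."""
        tool_name: str
        data_categories: list[str]        # categories of worker data collected
        business_purpose: str
        monitoring_frequency: str         # e.g., "continuous during shifts"
        monitoring_duration: str
        retention_period: str
        vendors: list[str]                # identities of third-party providers
        informs_employment_decisions: bool
        access_and_correction_process: str

    notice = SurveillanceToolNotice(
        tool_name="ShiftTrack (hypothetical)",
        data_categories=["login/logout times", "active-application logs"],
        business_purpose="payroll accuracy and scheduling",
        monitoring_frequency="continuous during scheduled shifts",
        monitoring_duration="ongoing while tool is deployed",
        retention_period="five years for records used in discipline",
        vendors=["ExampleAnalytics Inc."],
        informs_employment_decisions=True,
        access_and_correction_process="written request to HR",
    )
    print(f"30-day notice prepared for: {notice.tool_name}")
    ```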

    Once a surveillance system is up and running, it may collect, use, and retain information that is “reasonably necessary and proportionate” to the purpose identified in the notice, and employers bear joint liability for security breaches involving worker data. Contracts with analytics providers therefore must incorporate robust cybersecurity safeguards, cooperation duties and deletion obligations. Vendors must return worker data “in a user-friendly format” at contract end and delete remaining copies.

    AB 1221 would prohibit facial recognition, gait analysis, emotion detection and neural-data collection, but with one narrow carve-out: facial recognition may still be used solely to unlock a device or grant access to a locked or secured area. The bill also bars employers from using surveillance to infer protected traits such as immigration status, health or reproductive history, religion, sexual orientation, disability, criminal record or credit history.

    Employers may not rely primarily on monitoring data when disciplining or terminating a worker. If they choose to factor that data into such a decision, a human reviewer must corroborate it. The employer must notify the worker of the decision, provide a simple request form, and give the worker five business days to ask for the surveillance and corroborating records. Any valid correction must be made, and the personnel action adjusted, within twenty-four hours. Records that play any role in discipline must be retained for five years.

    Enforcement Mechanisms and Civil Exposure

    AB 1221 would vest enforcement authority in the Labor Commissioner, impose civil penalties of $500 per violation, and create a private right of action that includes actual and punitive damages as well as attorneys’ fees. Public prosecutors could also bring suit, and plaintiffs could seek injunctive relief, heightening litigation leverage for worker-side counsel.

    Points of Contention and Legislative Headwinds

    Industry groups, including the California chapter of SHRM, have criticized the proposal’s breadth, warning that it could hamper legitimate safety and operational uses of technology and saddle businesses with ambiguous compliance obligations. Labor advocates counter that AB 1221 supplies essential guardrails against what they describe as an exploding “digital Taylorism” that erodes privacy and exacerbates bias.

    Practical Implications for Employers

    If enacted, the bill would force employers to inventory every monitoring technology—no matter how routine—and to recalibrate vendor contracts, internal policies and disciplinary protocols. Multistate employers that already comply with New York City’s automated-employment-decision rules or the EU’s AI Act would confront new obligations around thirty-day advance notice, categorical technology bans and accelerated employee-data-access timelines. Because the measure’s private right of action is untethered to data-breach harm, plaintiffs’ lawyers would gain a fresh litigation hook wherever monitoring intersects with hiring, promotion or termination decisions.

    Takeaways

    Employers should begin mapping every data stream generated by workplace technologies, updating privacy notices and embedding human review into any algorithmically informed employment decision. Whether AB 1221 becomes law this session or next, the legislative trajectory is clear: AI-powered surveillance is migrating from operational convenience to regulated activity, and businesses that fail to get ahead of these requirements risk both regulatory penalties and private lawsuits.


  • Legal Strategies for Wineries Facing Challenges, Including Tariffs

    Coblentz partner Brandi Brown addressed questions regarding legal avenues available to mitigate the impact of shifting policies on tariffs, trade, and import-export regulations, and steps vintners and growers can take to address labor shortages while staying compliant with immigration and wage laws, in the North Bay Business Journal article “Wine Law Experts Discuss Legal Strategies for Wineries Facing Challenges, Including Tariffs.” The article is linked here.