Industry Technologies
How AI Is Impacting Legal Issues and Priorities In The Fashion Industry
by Dyan Finguerra-DuCharme, Partner, and Laure Sawaya, Of Counsel, Pryor Cashman LLP
The fashion and textile industries, long known for their emphasis on creativity and innovation, are undergoing a transformative shift due to advancements in artificial intelligence (AI). This technology is not only enhancing design, production, sustainability efforts, manufacturing processes, and consumer experiences, but also reshaping the legal landscape, particularly as it relates to intellectual property protection and enforcement, design development and protection, the ramifications of deep fakes on influencer and celebrity endorsements and collaborations, and how to prepare for all of the above in standard industry contracts going forward.
This article delves into these key areas, examining how AI is impacting the fashion industry and how legal teams can prepare for these shifts.
IP Rights in AI-Generated Works
AI’s role in generating new designs and textiles is growing. Algorithms can analyze vast datasets of fashion and textile trends, historical demand, and consumer preferences to create innovative new products based on objective data about market need and feasibility. However, this raises complex questions about IP rights. Traditionally, in the U.S., designers and/or brand owners can obtain various forms of IP rights in their designs, logos, graphics, photographs, brand names, innovative textiles, and prints, including trademark, trade dress, patent, and copyright protections. In the context of AI-generated creations and inventions, whether IP protection is available differs by discipline.
In particular, the Copyright Act only protects “original works of authorship,” which has been interpreted to mean works created by humans. For patents, AI systems may not be listed as inventors, but the use of an AI system by a human will not preclude that person from qualifying as an inventor if they “significantly contributed” to the claimed invention. In contrast, AI-created trademarks may be protected because trademark rights arise from use in commerce, not from the mark’s creation.
To protect their AI-generated designs, companies should involve human designers in the final stages of creation, ensuring that the human element needed to claim authorship or inventorship is present. The same is true when developing new textiles that would otherwise meet the criteria for a patent-protectable invention, or a creation sufficiently original to qualify for copyright protection. Companies must continue to include work-for-hire clauses in employee handbooks and consulting agreements to ensure that all rights vest directly in the company from the date of creation or invention. Because of the issues around protectability of AI-generated works, individual creators should expressly represent and warrant either that they will not use AI in creating any of the deliverables, or that they will do so only at the direction and with the guidance of the company. This allows the company to dictate the level of human involvement needed for the work to (potentially) still be considered a work of authorship or inventorship under then-applicable law.
AI in the Context of Trade Dress Development and Protection
Trade dress refers to the visual elements of a product, a product or textile’s design, or its packaging. To be protectable, the element must be nonfunctional and, for design elements (whether on a finished garment or a textile), must have achieved consumer recognition as being uniquely associated with a single manufacturer (i.e., a print on a fabric so recognizable that it comes to be seen as an indication of the source of the fabric or product). Brands are likely to use AI to develop new prints, patterns, and styles (which could potentially become trade dress) by analyzing consumer preferences, market trends, and competitive products. This data-driven approach allows brands to create unique and appealing elements that resonate with consumers and could rise to the level of trade dress. When seeking trade dress protection, or when proving and enforcing those rights, AI tools can generate information to demonstrate that a design or print has achieved secondary meaning.
For all its benefits, the use of AI in trade dress development also raises legal and ethical questions. For example, if AI generates trade dress elements based on existing designs, graphics, prints, or patterns, it may inadvertently produce similarities that could result in trade dress infringement claims. Ensuring that AI-generated trade dress is genuinely distinctive and non-infringing requires careful oversight and collaboration between designers and legal experts.
On the flip side, many fast-fashion brands are purposefully using AI to scour social media to identify fashion trends and then quickly manufacture copycat products, prints, and fabric looks that are price-point accessible for young consumers, including clothing, handbags, and accessories.
Companies should invest in robust IP management systems that leverage AI to monitor and enforce IP rights globally. These systems can track unauthorized use of designs, identify potential infringements, and streamline the enforcement process, thereby safeguarding the creative assets of fashion brands.
The Rise of Deep Fakes and Their Impact on Talent Endorsements and Brand Collaborations
Given that talent endorsements and celebrity collaborations are valuable forms of marketing in fashion, the talent’s reputation is of paramount concern to the fashion brands contracting with them. A rising concern is the potential for the rapid dissemination of believable misinformation, or resurfaced past truthful information, that can cause enormous reputational harm to a brand. Even before the rising threat of AI, reputational due diligence and contractual protections were critical when entering into talent agreements, with the brand needing to research and vet the talent’s background and reputation. This process continues throughout the duration of the relationship to guard against reputational harm, which requires vigilance and close communication with the talent’s team to navigate fraught situations and potential misinformation.
Talent agreements typically include a “Morals Clause” or “Conduct Clause.” These clauses provide an express remedy against talent whose conduct, statements, and/or associations adversely affect, or are likely to adversely affect, the public’s perception of that talent, and therefore of any project, brand, or company with whom the talent is associated.
The growing sophistication and ubiquity of AI technology complicate the traditional process of reputation management, particularly with the recent advent of deep fakes. AI-created deep fakes manipulate images and videos to create realistic but fabricated content, often featuring a near-identical replica of a specific person or product. In the fashion industry, deep fakes have significant implications, particularly in marketing, endorsements, and consumer trust. While deep fakes can be used creatively, such as for virtual fashion shows or personalized marketing (only done lawfully with the consent of the talent), unauthorized deep fakes pose risks related to authenticity and brand integrity. Because the bad actor is not actually the talent, contracting around the relevant behavior (as is traditionally done using a Morals Clause) has become more complicated. Put simply, deep fakes of talent raise questions as to what constitutes “conduct” to trigger the Morals Clause.
Fashion companies should think more expansively when drafting a Morals Clause about what events should be anticipated. Because reputational harm is the cornerstone of the Morals Clause, and AI can create such harm without any actual conduct by the talent, fashion brands are likely to push for wider-reaching Morals Clauses that cover public accusations and perceptions of misconduct, not only actual misconduct. Brands should also expand the timeframe during which the accusations or alleged conduct took place, because AI deep fakes can be made to appear to depict events from long ago, with the fabricated “conduct” only “discovered” today.
Similarly, the power of AI as a search tool might also change how easily one might uncover truthful past misconduct otherwise buried on the internet, which could help companies better vet the talent. With that said, fashion companies should also be prepared for talent to push back on these requests and seek an express carve-out to their liability for deep fakes.
Regardless of how liability for AI-generated “misconduct” is shared, talent agreements should include forward-looking remedial measures to mitigate the damage caused by the relevant misinformation. Companies should expressly require talent to take reasonable remedial measures to correct the misinformation via social media channels and through other means. Such measures might include an obligation for talent to actively engage in public discussions about the misinformation and to cooperate with the brand in undertaking remedial campaigns. The question becomes: at whose cost should such services be performed? Unlike a typical breach of contract, which places the burden and cost of “curing” misconduct on the party who engaged in that behavior, the harm caused by a deep fake was not caused by either party, so fairness may call for the company and the talent to share the burden.
Deep fakes can also be used to counterfeit digital products or images, making it difficult for consumers to distinguish between genuine and fake items shown online. To combat fake product offerings, brands should invest in technologies that can detect and prevent deep fakes, such as digital watermarking and blockchain-based authentication systems. The fashion industry should also create industry-wide standards to combat the spread of deep fakes and protect consumer trust.
Takeaways
AI has changed the way we operate and create. Brands need to understand how employees are relying upon AI to develop new designs, fabrics, and other innovations, and then carefully navigate which forms of IP protection are available. Influencers and celebrities need to be keenly aware of how their name, image, and likeness are used on the web and take swift measures to protect their reputations. We are working with clients, on both the brand side and the talent side, to make sure agreements include provisions that anticipate AI-generated issues, as well as guiding our clients through measures to ensure that their creations can be protected and their rights globally enforced.