DALIM SOFTWARE Blog

How to Preserve Visual Trust in an AI-Driven Marketing World

Written by Pauline Przyrowski Sadova | Jun 18, 2025 9:09:53 AM

The Explosion of AI-Generated Content 

Would you trust a brand whose entire visual communication is created by artificial intelligence? What would you think if you were captivated by a new model in a perfume advertisement, only to later find out that this person doesn’t exist? Would your perception change if the brand failed to disclose the use of AI in creating these visuals? 

These are pressing questions for today’s marketers. 

Several studies indicate that failing to disclose the use of AI in advertising significantly undermines perceived brand authenticity, an essential factor for building brand trust. According to a recent study by Santa Cruz Software, over half of consumers would avoid purchasing a product if its design appeared fake or inconsistent. In contrast, banner ads that include authenticity signals significantly enhance consumer trust.

A 2024 study further reveals that when consumers believe emotional marketing content is AI-generated rather than human-created, both positive word of mouth and customer loyalty decline.

Given these challenges, how can marketers uphold brand authenticity and build trust?

Why Image Authenticity Is Now a Marketing Priority

The inspiration for this article came from Paul Melcher’s compelling presentation at our DUO 25 conference. He addressed a critical issue: the difficulty of distinguishing synthetic from real images. This challenge not only affects marketing but also has serious implications for journalism, ethics, and consumer perception.

While this blog focuses on marketing, it’s clear that ensuring image authenticity is crucial for maintaining customer loyalty. So how can marketers demonstrate visual authenticity in a world dominated by generative AI? And how can Digital Asset Management (DAM) systems help in this mission? 

Combating Inauthentic Visuals: The Role of Attribution 

What can we do to counter the erosion of trust caused by AI-manipulated visuals, even from our favorite brands? One key solution lies in content attribution. 

As explained in the Content Authenticity Initiative White Paper "Setting the Standard for Digital Content Attribution", detection can only reactively identify deceptive content. In contrast, attribution proactively adds transparency, helping consumers make informed decisions. Content attribution exposes who modified a piece of content and what was changed, thereby empowering consumers and enhancing online trust.

The Three Pillars of Authenticity: Truth, Trust, and Metadata 

According to Paul Melcher, the best defense against synthetic content lies in three core principles: truth, trust, and authenticity. To uphold these values, metadata plays a central role. 

As defined by IBM, "metadata is information—such as author, creation date or file size—that describes a data point or data set." In the context of images, metadata can include camera settings, editing history, timestamps, and geolocation data. 

Metadata not only helps verify the provenance and authenticity of an image but also gives marketers and consumers a tool to judge whether the content should be trusted. 
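
As a quick illustration of what this looks like in practice, the short Python sketch below reads an image's embedded EXIF metadata with the widely used Pillow library (the file name is hypothetical):

```python
from PIL import Image, ExifTags

# Open an image and inspect its embedded EXIF metadata.
# "campaign_visual.jpg" is a hypothetical example file.
img = Image.open("campaign_visual.jpg")
exif = img.getexif()

if not exif:
    print("No EXIF metadata found: nothing to verify from EXIF alone.")
else:
    for tag_id, value in exif.items():
        # Translate numeric EXIF tag IDs into readable names.
        name = ExifTags.TAGS.get(tag_id, tag_id)
        print(f"{name}: {value}")
```

Fields such as DateTime, Make, Model, and GPSInfo correspond to the timestamps, camera details, and geolocation data mentioned above; their absence is itself a signal worth noting when judging an image's provenance.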

Industry Initiatives Supporting Image Authenticity 

Several initiatives have already been launched to address this challenge: 

  • Project Origin (2019): Focuses on securing trust in media. 
  • Content Authenticity Initiative (CAI): Launched in 2020 to set standards for digital content attribution. 
  • Coalition for Content Provenance and Authenticity (C2PA): Created under the Joint Development Foundation, combining efforts from CAI and Project Origin. 

For developers and content creators, the C2PA specifications and CAI open-source SDK (Software Development Kit) offer tools to create, verify, and display content credentials. 
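
As a minimal sketch of how this might look in practice, the snippet below calls the CAI's open-source c2patool CLI from Python to read an asset's Content Credentials. It assumes c2patool is installed and on the PATH, the file name is hypothetical, and the exact report format may vary between tool versions:

```python
import json
import subprocess

# Read an asset's Content Credentials with the CAI's open-source
# c2patool CLI (https://github.com/contentauth/c2patool).
# Assumes c2patool is installed; the file name is a hypothetical example.
result = subprocess.run(
    ["c2patool", "campaign_visual.jpg"],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    # Typically means the file carries no C2PA manifest,
    # or the tool could not read it.
    print("No Content Credentials found:", result.stderr.strip())
else:
    # c2patool reports the manifest store as JSON: who signed the
    # asset, which tools touched it, and what edits were recorded.
    manifest_store = json.loads(result.stdout)
    print(json.dumps(manifest_store, indent=2))
```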

However, limitations exist. Not all tools currently support C2PA, and embedded metadata can inadvertently expose sensitive information if not carefully managed. Adoption is also voluntary: the use of these tools is not yet backed by legislation, so organizations are free to decide whether or not to adopt them.

AI Act and Image Attribution: The EU’s Push for Transparent Synthetic Media

While the EU’s Digital Product Passport focuses on improving access to product information, it does not yet directly address image authenticity. However, other legislative efforts (such as the Digital Services Act, Code of Practice on Disinformation, and especially the AI Act) are shaping a regulatory framework that supports transparency and ethical use of artificial intelligence in content creation.

At DUO 24 United, CEPIC (Center of the Picture Industry), which advocates for fair innovation in the visual sector, outlined the implications of the EU AI Act for image integrity. Presented by Valérie Théveniaud-Violette, the session emphasized how legislative obligations can help enforce transparency and ethical standards in AI-generated visuals.

Key provisions in recitals 133–135 of the AI Act outline the urgent need for content attribution and technical marking of AI-generated or manipulated media, pointing to watermarks, metadata, cryptographic signatures, fingerprints, and logging methods as ways to verify the origin and authenticity of digital content.
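
To make one of these techniques concrete, the sketch below (an illustration, not a mechanism prescribed by the Act) computes a SHA-256 fingerprint of an image file in Python. Because changing even a single byte changes the hash, such fingerprints are useful for logging content at publication time and verifying later that it has not been altered; the file name is hypothetical:

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 fingerprint of a file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large image or video files don't exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the fingerprint at publication time; recompute it later to
# detect any modification. "campaign_visual.jpg" is a hypothetical file.
print(fingerprint("campaign_visual.jpg"))
```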

The Act is supported by the European AI Office, responsible for overseeing implementation and promoting trustworthy AI across the Union.

From August 2025, general-purpose AI models will be expected to comply with transparency requirements. Until full standards are established (by August 2027 or later), a non-binding Code of Practice will serve as a transitional guideline. CEPIC, however, strongly opposes the current third draft of this Code, stating that it heavily favors AI developers at the expense of content creators and legal clarity (CEPIC statement).

To remain compliant, CEPIC recommends the use of IPTC metadata standards when labeling AI-generated synthetic media, an essential step for upholding transparency and protecting the rights of content creators.
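
For illustration, IPTC's Digital Source Type vocabulary includes the value "trainedAlgorithmicMedia" for purely AI-generated media. The sketch below writes that label into an image's XMP metadata by calling the widely used ExifTool utility from Python; it assumes ExifTool is installed, and the file name is hypothetical:

```python
import subprocess

# Label an image as AI-generated using the IPTC Digital Source Type
# vocabulary, written into XMP via ExifTool (assumed installed).
# "trainedAlgorithmicMedia" is IPTC's term for purely AI-generated media.
subprocess.run(
    [
        "exiftool",
        "-XMP-iptcExt:DigitalSourceType="
        "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
        "campaign_visual.jpg",  # hypothetical example file
    ],
    check=True,
)
```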

Overcoming Metadata Limitations with Invisible Watermarking 

One critical limitation is that metadata can be lost during export or publishing. Paul Melcher highlights a promising solution: invisible watermarking.

Invisible watermarks can certify the origin of images and videos even when metadata is stripped away. This technique supports source verification, reduces misinformation, and ensures compliance with C2PA standards, essential in today’s era of AI-generated visuals (Imatag). 
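
Production systems such as IMATAG's rely on far more robust, proprietary techniques, but a toy least-significant-bit (LSB) sketch illustrates the core idea: a mark hidden in pixel data that is invisible to the eye yet recoverable by software. This deliberately simplified Python example assumes NumPy and Pillow, with hypothetical file names:

```python
import numpy as np
from PIL import Image

MARK = 0b1  # toy one-bit watermark value

def embed(path_in: str, path_out: str) -> None:
    """Set the least significant bit of every red value to MARK."""
    pixels = np.array(Image.open(path_in).convert("RGB"))
    pixels[..., 0] = (pixels[..., 0] & 0xFE) | MARK  # invisible to the eye
    Image.fromarray(pixels).save(path_out, format="PNG")  # lossless format

def detect(path: str) -> bool:
    """Return True if the toy watermark survives in most of the image."""
    pixels = np.array(Image.open(path).convert("RGB"))
    return float(np.mean(pixels[..., 0] & 0b1)) > 0.9  # tolerate minor noise

# Hypothetical usage:
# embed("original.png", "marked.png")
# print(detect("marked.png"))
```

Note that this toy mark only survives in a lossless format; JPEG compression would destroy it, which is precisely why commercial invisible watermarks are engineered to withstand compression, resizing, and cropping.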

Conclusion: How DALIM ES Supports Image Authenticity 

The authenticity of visual content is no longer a niche concern: it’s central to brand integrity, consumer trust, and ethical marketing. In the AI age, brands must embrace tools such as metadata, content attribution, and invisible watermarking to uphold transparency and truth.

For companies looking to implement these principles, DALIM ES offers an all-in-one DAM and workflow platform designed to help preserve visual authenticity: 

  • Metadata Management: DALIM ES lets you upload, embed, and manage metadata at every stage of the asset lifecycle. 
  • Authenticity Reporting: Extract and verify metadata to assess trust and provenance. 
  • Visual Workflows: Set up custom workflows to check image sources, detect inconsistencies, and report authenticity in real time. 

By incorporating these capabilities, marketers can ensure their visual communications remain transparent, compliant, and trustworthy, even in a world increasingly shaped by AI. 

Let's talk!