Could AI transparency backfire for businesses?


On 11 November, at London's JournalismAI Festival 2025, Liz Lohn, Financial Times director of product and AI, told an audience that AI disclosure presented a challenge for the company's trusted, premium brand.

“People pay a lot, and whenever they see an AI disclaimer, that erodes trust and creates the feeling that AI in journalism equals cheap,” says Lohn. In fact, Lohn admits that there have been instances of readers cancelling their subscriptions citing AI disclosure as the reason.

Walking this delicate path of AI disclosure is not unique to the FT. But the company serves as a bellwether for hard decisions about AI transparency that all companies are starting to face. And the FT's success at weathering successive technology transformations makes its approach particularly noteworthy.

The FT is being very careful, explains Lohn, about the manner of disclosure. “If the content has been through human review, do we really need to put in the disclaimer that AI at some point has been used in the process, or is the onus on the journalist, on the FT?” says Lohn, adding: “because the disclaimers do really erode trust.”

Even when there is no human in the loop, content needs to be viewed as trustworthy. “Again, we're trying to be very careful with it,” admits Lohn. Both cases need to maintain a very high quality of output, so that the engagement uplift can counter the negative impact of disclosure and the loss of users who reject the brand as a result of it.

The FT's real-world experience runs counter to the received wisdom on AI transparency. Widespread AI adoption over the last three years has given rise to an accompanying and burgeoning area of AI ethics. Within this field, AI transparency has been a cornerstone of building user trust. The evolution of this approach includes the idea of ‘radical transparency’, a term popularised in 2017, well before the AI revolution, by hedge fund Bridgewater Associates founder, Ray Dalio. The approach advocates open communication within businesses, which in practice means sharing company information, data and processes with employees and, to some extent, the general public. This, says Dalio, is the most direct path to building trust.

As businesses grapple with the moral, philosophical and regulatory quandaries presented by implementing AI into their everyday work processes and products, the ethical AI movement has reinforced this idea of transparency as the way towards greater trust in the technology and the brands using it. But this may all change.
