Esquire Singapore Interviewed Zoro’s Actor…’s AI Facsimile

Mackenyu as Roronoa Zoro in Netflix's One Piece

The headline isn’t hyperbole, so fuck me, I guess. What are we even doing here?

Full story, which I first saw shared by Kotaku: On March 6, Esquire Singapore published an interview conducted by Joy Ling of Netflix’s live-action One Piece’s Roronoa Zoro actor Mackenyu’s… LLM-generated doppelganger. And the only reason I didn’t fill that last sentence with sarcasm quotes is that I’m really trying to use them less, so I’m quitting cold turkey.

The interview says, right above where the AI back-and-forth begins and after several paragraphs of both interesting facts (huh, Mackenyu’s Sonny Chiba’s son. That’s neat.) and incredibly florid prose:

“The following interview was produced with Claude, Copilot, and edited by humans.”

My first assumption was that this was a pitch from Netflix or Mackenyu’s own rep, using an ill-advised AI version of the star to get attention. Shady, but not that surprising considering some of the pitches I’ve gotten. No, this was an active decision by Esquire Singapore, which was apparently facing a deadline after interview plans with the actual Mackenyu fell through:

“We had the photospread, but nothing directly uttered by the 29-year-old. With a driving need for a feature, we had to be inventive. Harnessing our creative license, we pulled his verbatim from previous interviews and fed them through an AI programme to formulate new responses.”

Turning to an LLM and feeding it material from other interviews isn’t being inventive in any way that’s remotely responsible. At best, it’s cutting up and rearranging sentences from those other interviews and trying to pass them off as new information. At worst, and most likely considering how these AIs work, it’s taking completely made-up statements and publishing them as if they were what Mackenyu would have said.

Speaking as a journalist here (well, not here, because here I’m a ranting aggregator with no editor who swears too much, but in terms of my day job and formal training), passing off this story as anything remotely resembling an actual interview with the subject is wild journalistic malpractice. LLMs are not thinking machines; they are response engines that use the data they’re fed to predict the responses most likely to be accepted. Even if they did think and process information like we do, they can’t emulate people, because no data set is complete enough to do that. Interviews, biographies, whatever information is available on a given person is a sliver of who they are. LLMs don’t know a person’s secrets. They don’t know a person’s feelings. They don’t know how many masks are being worn and when, and what they cover. They only know what they’re fed, and that’s almost always solely public-facing information.

If you’re hitting a print deadline and aren’t getting what you need to fill pages, don’t turn to AI at all, but especially don’t turn to AI for some mockery of an interview with an actual person. Put together an art spread. Write some fluff. Dust off some evergreen material you haven’t published yet. But don’t do… this.
