Earlier today, the FBI shared two blurry photos on X of a person of interest in the shooting of right-wing activist Charlie Kirk. Numerous users replied almost immediately with AI-upscaled, “enhanced” versions of the pictures, turning the pixelated surveillance shots into sharp, high-resolution images. But AI tools aren’t uncovering secret details in a fuzzy picture; they’re inferring what might be there, and they have a track record of showing things that don’t exist.

Many AI-generated photo variations were posted under the original images, some apparently created with X’s own Grok bot, others with tools like ChatGPT. They vary in plausibility, though some are obviously off, like an “AI-based textual rendering” showing a clearly different shirt and a Gigachad-level chin. The images are ostensibly meant to help people find the person of interest, though they’re also an eye-grabbing way to get likes and reposts.

But it’s unlikely any of them are more helpful than the FBI’s photos. In past incidents, AI upscaling has done things like “depixelating” a low-resolution picture of President Barack Obama into a white man and adding a nonexistent lump to President Donald Trump’s head. Upscaling tools don’t recover detail the camera never captured; they extrapolate from the pixels that are there to fill in the gaps, and while that can be useful in some circumstances, you definitely shouldn’t treat the output as hard evidence in a manhunt.
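To make that concrete, here’s a minimal NumPy sketch (not from the FBI images; the data is random and purely illustrative) of why upscaling can’t truly recover lost detail: many different high-resolution images collapse to the exact same low-resolution one, so any “enhancement” is a guess about which original it came from.

```python
import numpy as np

rng = np.random.default_rng(0)

def downsample(img, factor=8):
    """Average-pool an image by `factor`, roughly the information loss a
    distant, low-resolution surveillance frame has already undergone."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Two different high-resolution "images" (random stand-ins for two different faces).
original_a = rng.random((64, 64))
original_b = original_a + (rng.random((64, 64)) - 0.5) * 0.2

# Nudge each 8x8 block of the second image so its average matches the first's.
for i in range(0, 64, 8):
    for j in range(0, 64, 8):
        block = original_b[i:i + 8, j:j + 8]
        original_b[i:i + 8, j:j + 8] = block - block.mean() + original_a[i:i + 8, j:j + 8].mean()

# Both distinct originals now collapse to the same low-res image. An upscaler
# handed that low-res image has no way to know which original produced it,
# so whatever detail it adds back is inference, not recovery.
print(np.allclose(downsample(original_a), downsample(original_b)))  # True
```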

Here is the original post from the FBI, for reference:

And below are some examples of attempted “enhancements.”
