Meta has again tripped up in its rush to implement moderation at scale, and this time the victims are photographers posting real photos on Instagram.
In recent days, a number of photographers have noticed that their authentic work is being labeled “Made With AI” after they upload it to Instagram.
This has been applied even to images that are genuine shots with minimal post-production editing through tools like Photoshop.
These AI labels stain the reputation of the photographers they’re applied to, creating the impression that the images are entirely fake works generated with tools like Midjourney or Adobe Firefly.
One recent example of this happening is photographer Eric Paré, who put serious effort into creating stunningly fantastical but completely real photos of a model posing on the Uyuni Salt Flats of Bolivia.
He then uploaded a collection of these images to Instagram only to have the social platform’s AI detector slap them with a “Made With AI” label.
This happened despite a short video in the IG post sequence showing him physically setting up his shoot, and a photo of his camera’s LCD screen displaying his genuine creation being captured.
A key irony here is that the detector Meta uses to spot AI images in Instagram posts is itself an AI algorithm (of a different sort), and it is obviously screwing up its job, probably for lack of human moderation.
The shame in that is that genuine photographers, who work damn hard to create vividly fantastical artistic shots within the physical world, are being mislabeled as if they were little more than users of some split-second AI rendering program.
Paré explained to PetaPixel that he did run his images through an AI denoise program, but this is a far cry from anything resembling image creation with AI.
Further tests of Meta/Instagram’s AI detection algorithm conducted by PetaPixel suggest that even tiny image modifications made with tools like Photoshop’s Generative Fill, or other AI mechanisms, trigger the AI detector.
However, images first copied and pasted into new documents before being uploaded, or images posted as screenshots, don’t trigger the Meta AI detector.
In other words, the new Meta tool is likely picking up Content Credentials metadata embedded into any image edited through Adobe’s tools and simply stamping on “Made With AI” with zero regard for degree and nuance.
This of course would be a very typically ham-fisted screwup from Meta and not at all surprising.
If true, it’s a new spin on many, many previous incidents in which the company’s algorithmic content moderation mechanisms work in the most rigid way possible to the point of sliding into absurdity.
Previous instances include things like the platform’s algorithmic nudity controls flagging and banning photos of antique sculptures.
Essentially, it’s a demonstration of how algorithmic and AI tools can be useful and useless at the same time, depending on context.
In the case of these new AI labels being applied to real photos, the Meta algorithm again becomes far too reductive, and also arbitrary.
Thus, even though a photographer puts enormous effort into setting up a perfectly real composition, a single use of an AI tool that leaves behind certain metadata immediately gets their work flagged, with no regard for context or nuance.
Meta itself states that there are two reasons why its platforms will apply the “Made With AI” label.
The first:
“When Meta detects AI use
Any content that contains industry-standard signals that it’s generated by AI will be labeled as “Made with AI.” This includes content that is created or edited using third-party AI tools. It also includes content that is created using Meta’s AI tools, downloaded and then uploaded to Facebook, Instagram and Threads.”
And the second:
“When people identify their AI-generated content
People using Meta products are also able, and in some cases required, to label their content as “Made with AI” when they share AI-generated content or content that was altered using AI.”
Both of the above leave lots of room for ambiguity. What’s more, the company’s page on how AI-generated content is identified also notes that not all images with AI-generated content will be labeled.
Meta’s criteria for applying, or withholding, the label do seem a bit odd, though:
On the one hand work like Pare’s gets labeled as an AI creation, while on the other Meta has also opened the floodgates to an absolute avalanche of paid, AI-generated sludge content and spam on Instagram.
These monetized posts are then frequently not labeled as “Made With AI” despite being pushed onto IG and Facebook users by the platform’s own “engagement” algorithms to the point of crowding out even organic posts from the artists that users personally follow.
It’s a rather curious strategy by Meta for protecting viewers from AI fakes.
Image credit: Eric Paré