Think Twice Before Creating That ChatGPT Action Figure

May 1, 2025 9:56 AM

People are using ChatGPT’s new image generator to take part in viral social media trends. But using it also puts your privacy at risk—unless you take a few simple steps to protect yourself.

Photograph: David Benito/Getty Images

At the start of April, an influx of action figures started appearing on social media sites including LinkedIn and X. Each figure depicted the person who had created it with uncanny accuracy, complete with personalized accessories such as reusable coffee cups, yoga mats, and headphones.

All this is possible because of OpenAI’s new GPT-4o-powered image generator, which supercharges ChatGPT’s ability to edit pictures, render text, and more. OpenAI’s ChatGPT image generator can also create pictures in the style of Japanese animated film company Studio Ghibli—a trend that quickly went viral, too.

The images are fun and easy to make—all you need is a free ChatGPT account and a photo. Yet to create an action figure or Studio Ghibli-style image, you also need to hand over a lot of data to OpenAI, which could be used to train its models.

Hidden Data

The data you are giving away when you use an AI image editor is often hidden. Every time you upload an image to ChatGPT, you’re potentially handing over “an entire bundle of metadata,” says Tom Vazdar, area chair for cybersecurity at Open Institute of Technology. “That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.”
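To get a sense of how much an image file carries, here is a minimal sketch in Python, assuming the Pillow library is installed and using a placeholder file name. It prints the standard EXIF tags, then the GPS records Vazdar describes.

    from PIL import Image
    from PIL.ExifTags import TAGS, GPSTAGS

    img = Image.open("photo.jpg")  # placeholder file name
    exif = img.getexif()

    # Standard EXIF tags: capture time, camera make and model, and so on
    for tag_id, value in exif.items():
        print(TAGS.get(tag_id, tag_id), value)

    # GPS coordinates are stored in a separate IFD (tag 0x8825)
    for tag_id, value in exif.get_ifd(0x8825).items():
        print(GPSTAGS.get(tag_id, tag_id), value)

On a phone photo taken with location services on, the output typically includes DateTime, the device model, and GPSLatitude/GPSLongitude values precise enough to locate where the picture was taken.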

OpenAI also collects data about the device you’re using to access the platform. That means your device type, operating system, browser version, and unique identifiers, says Vazdar. “And because platforms like ChatGPT operate conversationally, there’s also behavioral data, such as what you typed, what kind of images you asked for, how you interacted with the interface and the frequency of those actions.”

It’s not just your face. If you upload a high-resolution photo, you’re giving OpenAI whatever else is in the image, too—the background, other people, things in your room, and anything readable, such as documents or badges, says Camden Woollven, group head of AI product marketing at risk management firm GRC International Group.

This type of voluntarily provided, consent-backed data is “a gold mine for training generative models,” especially multimodal ones that rely on visual inputs, says Vazdar.

OpenAI denies it is orchestrating viral photo trends as a ploy to collect user data, yet the firm certainly gains an advantage from it. OpenAI doesn’t need to scrape the web for your face if you’re happily uploading it yourself, Vazdar points out. “This trend, whether by design or a convenient opportunity, is providing the company with massive volumes of fresh, high-quality facial data from diverse age groups, ethnicities, and geographies.”

OpenAI says it does not actively seek out personal information to train models—and it doesn’t use public data on the internet to build profiles about people to advertise to them or sell their data, an OpenAI spokesperson tells WIRED. However, under OpenAI’s current privacy policy, images submitted through ChatGPT can be retained and used to improve its models.

Any data, prompts, or requests you share help teach the algorithm—and personalized information helps fine-tune it further, says Jake Moore, global cybersecurity adviser at security outfit ESET, who created his own action figure to demonstrate the privacy risks of the trend on LinkedIn.

Uncanny Likeness

In some markets, your photos are protected by regulation. In the UK and EU, data-protection rules such as the GDPR offer strong protections, including the right to access or delete your data. At the same time, use of biometric data requires explicit consent.

However, photographs become biometric data only when processed through a specific technical means allowing the unique identification of a specific individual, says Melissa Hall, senior associate at law firm MFMac. Processing an image to create a cartoon version of the subject in the original photograph is “unlikely to meet this definition,” she says.

Meanwhile, in the US, privacy protections vary. “California and Illinois are leading with stronger data protection laws, but there is no standard position across all US states,” says Annalisa Checchi, a partner at IP law firm Ionic Legal. And OpenAI’s privacy policy doesn’t contain an explicit carve-out for likeness or biometric data, which “creates a grey area for stylized facial uploads,” Checchi says.

The risks include your image or likeness being retained, potentially used to train future models, or combined with other data for profiling, says Checchi. “While these platforms often prioritize safety, the long-term use of your likeness is still poorly understood—and hard to retract once uploaded.”

OpenAI says its users’ privacy and security are a top priority. The firm wants its AI models to learn about the world, not private individuals, and it actively minimizes the collection of personal information, an OpenAI spokesperson tells WIRED.

Meanwhile, users have control over how their data is used, with self-service tools to access, export, or delete personal information. You can also opt out of having content used to improve models, according to OpenAI.

ChatGPT Free, Plus, and Pro users can control whether they contribute to future model improvements in their data controls settings. OpenAI does not train on ChatGPT Team, Enterprise, and Edu customer data by default, according to the company.

Trending Topics

The next time you are tempted to jump on a ChatGPT-led trend such as the action figure or Studio Ghibli–style images, it’s wise to consider the privacy trade-off. The risks apply to ChatGPT as well as many other AI image editing or generation tools, so it’s important to read the privacy policy before uploading your photos.

There are also steps you can take to protect your data. In ChatGPT, the most effective is to turn off chat history, which helps ensure your data is not used for training, says Vazdar. You can also upload anonymized or modified images, for example, using a filter or generating a digital avatar rather than an actual photo, he says.
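As one illustration of the filter approach, here is a minimal sketch, again in Python with Pillow and placeholder file names; a downscale plus a heavy blur removes fine detail, though it is illustrative rather than a guarantee of anonymity.

    from PIL import Image, ImageFilter

    img = Image.open("photo.jpg")  # placeholder file name

    # Downscale first, then blur: together they destroy fine detail
    # such as faces, text, and badges that a blur alone might leave.
    small = img.resize((img.width // 4, img.height // 4))
    blurred = small.filter(ImageFilter.GaussianBlur(radius=6))
    blurred.save("photo_obscured.jpg")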

It’s worth stripping out metadata from image files before uploading, which is possible using photo editing tools. “Users should avoid prompts that include sensitive personal information and refrain from uploading group photos or anything with identifiable background features,” says Vazdar.
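Stripping metadata is also scriptable. A minimal sketch with Pillow, again using placeholder file names, copies only the pixels into a fresh image, which starts with no EXIF data attached.

    from PIL import Image

    img = Image.open("photo.jpg")  # placeholder file name

    # A newly created image has no metadata; pasting transfers the
    # pixels only, leaving EXIF timestamps and GPS coordinates behind.
    clean = Image.new(img.mode, img.size)
    clean.paste(img)
    clean.save("photo_no_exif.jpg")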

Double-check your OpenAI account settings, especially those related to data use for training, Hall adds. “Be mindful of whether any third-party tools are involved, and never upload someone else’s photo without their consent. OpenAI’s terms make it clear that you’re responsible for what you upload, so awareness is key.”

Checchi recommends disabling model training in OpenAI’s settings, avoiding location-tagged prompts, and steering clear of linking content to social profiles. “Privacy and creativity aren’t mutually exclusive—you just need to be a bit more intentional.”

Credit: www.wired.com
