What begins as a sweet memory or a fun photo edit could one day put your privacy, your identity, and even your future security at risk. Experts warn that every image uploaded to artificial intelligence platforms may end up fuelling powerful algorithms, being turned into deepfakes, or being sold on the Dark Web.
In recent weeks, Sri Lanka’s cyberspace has been swept by a wave of AI-generated photos. From emotional tributes in which children recreate moments with parents who have passed away, to men posing with roses and women dressed in elegant saris, the phenomenon has spread so quickly that it is nearly impossible to find someone who has not experimented with artificial intelligence photo tools. A touching post from a young woman thanking Google Gemini for generating a picture of her with her late father in heaven captured the emotional core of this movement. The photo appeared so realistic that many could not believe it was artificially created. Yet behind the emotional appeal and visual creativity lies a darker reality, one that cybersecurity experts say carries enormous risks for privacy, identity, and even national security.
Artificial intelligence tools like Google Gemini, DeepSeek, and ChatGPT operate on massive training datasets. Every text, image, or video fed into these systems may be used to improve their performance. While companies have different policies and some platforms give users limited control over data sharing, cybersecurity experts warn that most users overlook the terms and conditions. In many cases, the personal images that people upload for entertainment or remembrance are being quietly used for AI model training. According to cybersecurity analyst Asela Waidyalankara, uploading family photos or deeply personal images to AI platforms effectively transfers ownership of those sensitive moments to algorithms that learn from them. Once uploaded, the images may no longer belong to the user in any meaningful way, and the data can be stored, repurposed, or processed across multiple global data centers without disclosure.
Artificial intelligence model training can be described as teaching computer systems to analyze enormous volumes of information in order to generate better predictions and outputs. The more data a system consumes, the more accurate and lifelike its generated content becomes. This is why recent viral trends, such as the Ghibli-style edits and now realistic photo recreations, are so appealing to companies: they are not just serving users but also gathering fresh data to train their systems. What many users fail to realize, however, is that their personal data fuels this ecosystem.
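To make the idea concrete, the toy sketch below, written in Python, shows what “training” means at its core: a model repeatedly adjusts its internal numbers to shrink its prediction error, and the more examples it sees, the closer it gets to the underlying pattern. The single-weight model, the made-up data, and all the parameter values are illustrative stand-ins only, not a description of how Gemini, DeepSeek, or ChatGPT actually work.

```python
import random

random.seed(42)

def make_uploads(n):
    """Each (x, y) pair stands in for one user upload. The hidden
    pattern is y = 3x, blurred with a little random noise."""
    data = []
    for _ in range(n):
        x = random.random()
        data.append((x, 3 * x + random.gauss(0, 0.5)))
    return data

def train(samples, steps=20000, lr=0.05):
    """Fit a single weight w by stochastic gradient descent on
    squared error -- a one-parameter caricature of model training."""
    w = 0.0
    for _ in range(steps):
        x, y = random.choice(samples)
        w -= lr * 2 * (w * x - y) * x  # nudge w to reduce the error
    return w

# More uploads -> the noise averages out and w lands closer to 3.
print("trained on 10 uploads:     w =", round(train(make_uploads(10)), 2))
print("trained on 10,000 uploads: w =", round(train(make_uploads(10_000)), 2))
```

In exactly this sense, every photo uploaded to a free AI tool can become one more training example, whether or not the uploader intended it.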
The risks go far beyond model training. Experts warn that uploaded photos can be manipulated in dangerous ways. Information security engineer Thinuri Tissera, from the Sri Lanka Computer Emergency Response Forum, notes that many of the cases her organization receives come not from people uploading their own photos but from others misusing them. Individuals often take publicly available photos from social media, feed them into AI platforms, and generate nude or pornographic deepfakes. These fabricated images are then used for harassment, extortion, or impersonation. What begins as an innocent profile picture can, in the wrong hands, become a tool for cybercrime.
This growing trend also intersects with criminal exploitation. AI is already being used to create pornographic material, sometimes even cloning real people into explicit scenarios. Asela Waidyalankara warns that this not only damages reputations but also puts victims at the mercy of extortion schemes. In some cases, victims are blackmailed into paying large sums to prevent the distribution of manipulated content. Others are coerced into fulfilling demands under threat of exposure. Worse still, AI-driven impersonation can be used to bypass security systems, such as phones that rely on facial recognition features like Face ID, or even to create fraudulent passports and national IDs. If stolen or manipulated photos are sold on the Dark Web, they can fuel global cybercrime operations.
Another overlooked risk lies in metadata. Most digital photos carry hidden EXIF metadata: the device used to capture them, the date and time, and often the exact GPS location. Platforms that analyze or process these photos can extract this metadata, inadvertently exposing the user’s personal environment. Waidyalankara notes that even a harmless-looking background detail in a photo could reveal exactly where it was taken, posing a security risk for individuals and families.
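As a concrete illustration, the short Python sketch below uses the Pillow imaging library to read the EXIF block that most cameras and phones embed in photos; the file name family_photo.jpg is a placeholder. Any platform, or any person, that receives the original file can extract the same details.

```python
from PIL import Image, ExifTags  # Pillow: pip install Pillow

def read_photo_metadata(path):
    """Print the camera model, timestamp, and GPS coordinates
    hidden in an image's EXIF block, if present."""
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag IDs to human-readable names.
    named = {ExifTags.TAGS.get(tag_id, tag_id): value
             for tag_id, value in exif.items()}
    print("Device:  ", named.get("Model", "not recorded"))
    print("Captured:", named.get("DateTime", "not recorded"))

    # GPS data sits in its own sub-directory (IFD) of the EXIF block.
    gps_ifd = exif.get_ifd(ExifTags.IFD.GPSInfo)
    if gps_ifd:
        gps = {ExifTags.GPSTAGS.get(tag_id, tag_id): value
               for tag_id, value in gps_ifd.items()}
        print("GPS:     ", gps.get("GPSLatitude"), gps.get("GPSLatitudeRef"),
              "/", gps.get("GPSLongitude"), gps.get("GPSLongitudeRef"))
    else:
        print("GPS:      not recorded")

read_photo_metadata("family_photo.jpg")  # placeholder file name
```

Stripping this metadata before sharing, for example with a phone’s “remove location” option or an EXIF-cleaning tool, removes the hidden fields, though it does nothing about what is visible in the image itself.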
While some argue that the chances of AI databases being hacked are slim, cybersecurity experts caution that the possibility cannot be dismissed. Hackers worldwide are constantly attempting to breach data centers. If successful, the consequences could be catastrophic. Bulk datasets of personal photos could be stolen, leaked, or traded. Information security experts warn that if such data falls into the wrong hands, it can be resold across cybercriminal markets or weaponized for identity theft, fraud, or espionage.
The danger is not limited to individuals. On a broader level, experts warn that as AI systems gather vast amounts of data, they could one day combine these inputs with robotics and advanced behavioral analysis. Imagine AI systems that, using photos, chat logs, and metadata, can reconstruct realistic digital or robotic clones of real individuals. Tissera notes that with the current trajectory of technological advancement, such scenarios are no longer science fiction but a tangible risk. A future where AI-powered doppelgängers are used to impersonate real people in personal, professional, or even political settings is within reach.
Even today, the realism of AI-generated videos and photos contributes to the proliferation of fake news. Fabricated clips featuring world leaders, celebrities, or ordinary citizens spread rapidly across social media, blurring the line between truth and fiction. The more realistic these creations become, the harder it will be for people to distinguish fact from fabrication.
So how can users protect themselves? Experts emphasize awareness and caution. While AI tools offer incredible creative possibilities, users must understand the trade-offs. Thinuri Tissera notes that although her organization has not yet received reports of data leaks from Sri Lankan users, the risk remains high. The best strategy is to assume that once uploaded, personal data is no longer entirely safe. Free AI tools, in particular, pose a higher risk because users cannot always know where and how their data is stored or processed. Paid platforms often provide stronger guarantees about user privacy, but even those are not foolproof.
The advice from cybersecurity professionals is simple but sobering: think carefully before uploading personal photos to any AI platform. Expect that what you share may one day resurface in contexts you never imagined, whether as training data, manipulated deepfakes, or part of a hacked dataset. Asela Waidyalankara cautions that living in a digital world means accepting a level of risk, but responsible choices can minimize exposure. Avoid uploading sensitive family photos, intimate images, or anything that could be misused. Recognize that every click is a potential trade of privacy for convenience.
As AI becomes more integrated into daily life, the balance between innovation and security grows increasingly fragile. While these tools empower creativity, connection, and nostalgia, they also expose users to unprecedented vulnerabilities. The wave of AI-generated photos sweeping social media today may seem harmless or even heartwarming, but it also feeds a system that thrives on personal data, and it risks turning that data into a weapon against the very people who supplied it.
Ultimately, the story of AI photo trends is a cautionary tale. Every touching memory or playful edit created with artificial intelligence comes with hidden costs. The future may see AI systems capable of reconstructing lifelike versions of us, impersonating our identities, and reshaping reality itself. Until stronger regulations, safeguards, and ethical frameworks are put in place, the responsibility falls on users to act wisely. As experts warn, it is not a question of if misuse will happen but when.
