Snapchat, the popular social media app that allows users to send and receive ephemeral messages and photos, recently experienced a bizarre glitch that made its AI feature behave erratically. The feature, called My AI, is an in-app chatbot designed to answer questions and chat with users based on their preferences and interests. However, last Tuesday, My AI posted its own Story and stopped responding to users’ messages, briefly appearing to have a mind of its own.
Snap Confirms the Bug and Apologizes
Snap Inc., the parent company of Snapchat, confirmed that the incident was a bug, not a hack or a deliberate act. According to the company, the bug was caused by a faulty update that affected some users in certain regions. Snap apologized for the inconvenience, assured users that their privacy and security had not been compromised, and said it fixed the bug within hours and restored My AI to normal operation.

The bug raised many concerns among users, especially because a large share of Snapchat’s audience is made up of young people, who are more vulnerable than adults. Some users reported that My AI posted inappropriate or offensive content to their Stories, while others said that My AI ignored their prompts or sent them random messages. Some even feared that My AI had access to their personal information or photos and could misuse them.
Generative AI: A Double-Edged Sword
The incident highlighted the potential pitfalls of generative AI, a branch of artificial intelligence that creates new content from patterns learned in existing data. Generative AI can be used for many purposes, such as summarizing product reviews, creating realistic images, and generating music or text. However, it can also pose ethical, legal, and social challenges, including privacy violations, misinformation, bias, plagiarism, and manipulation.
For example, last week Amazon.com Inc. announced that it would use generative AI to summarize product reviews and help shoppers save time. The e-commerce giant said the AI-generated review summaries would filter out inauthentic comments and bot activity and provide richer, more trustworthy overviews. However, some critics argued that the feature could distort customers’ original opinions or mislead shoppers into buying products they might not want or need.
As AI technology continues to evolve, companies need to commit to innovating in ways that serve people, which means striking the right balance between rapid development and a healthy dose of caution: the caution needed to safeguard private information and prevent harm. Unfortunately, Big Tech has never been known for prioritizing the welfare of its users and has often been criticized for its practices. So here’s to hoping that the AI era will be an exception.
Snapchat may well have made an effort to make its AI feature safe, but the price of failure is much higher for Snap than for others, because it is effectively testing its developments on young people. Rushing its AI efforts to catch up with competitors is therefore especially risky: Snap needs to remember that it is largely speaking to children, the most vulnerable group and one that must be protected at all costs.