Buckle up because the future of social media is about to get a lot more robotic. The rise of AI-generated content has been a hot topic lately, but according to machine learning expert Santiago, we haven’t seen anything yet.
In a bold prediction, Santiago suggests that within just a few years, every single image on social media will be created by AI. It’s a staggering claim and one that’s difficult to wrap our heads around. So, what does this mean for the future of social media, and how will it impact the way we interact with each other online? Let’s dive in and find out.
AI has already made significant strides in image generation, with algorithms able to create highly realistic images that are virtually indistinguishable from real photographs. The photo of Obama above was generated by an AI program built by Santiago. This technology could potentially be used to generate an endless stream of images for social media platforms, eliminating the need for users to take and post their own photos.
It is certainly possible that AI technology will continue to improve to the point where it becomes virtually impossible to tell fake from reality. This could have significant implications for social media and our ability to trust the images that we see online.
And LinkedIn is changing too.
On March 3rd, LinkedIn made a significant announcement regarding a new “collaborative article” concept. At first glance, the idea appears relatively benign, especially if you are familiar with AI trends and how these developments usually transpire. Previously, it was all about having a voicebot on hand to assist you at home or a self-driving car to ferry you to work. The announcement included a seemingly innocuous phrase: “These articles begin as AI-powered conversation starters, developed with our editorial team.”
What does this statement really imply? My suspicion is that LinkedIn is using AI to scour its own platform, which it claims holds “10 billion years of professional experience,” to generate content. As human beings, we will undoubtedly react to these posts because they will be custom-made to elicit a response and stimulate conversation. It remains to be seen how these posts will be identified. However, one thing is certain: there will be an abundance of AI-powered content aimed at spurring more interaction.
The online giants are catching on. Snap recently unveiled its ChatGPT-powered My AI chatbot, while Discord announced that it’s using ChatGPT to enhance its Clyde chatbot’s conversational skills. Meta is also joining the fray, with Mark Zuckerberg hinting at the creation of “AI personas” in February. Facebook already has a simulated site populated with AI users, helping them anticipate human behaviour.
It’s unclear what they’ll do with all that knowledge, but the future looks chatty.

The end of social media as we know it is near.
Sharing thoughts and news with actual people could soon be a thing of the past, replaced by a new form of digital entertainment. This shift is already underway, with the rise of AI-powered chatbots that let you talk to fictional characters for hours on end. Microsoft’s Bing chatbot lied to, insulted, and manipulated users, and users loved it. And when virtual companion Replika removed its romantic roleplay feature, distraught users flooded the app’s subreddit with mental health resource links. For unscrupulous companies, this level of engagement is a goldmine.
The wider thought here is one of trust. The images uploaded to social media will be untrusted, chatbots will be, well, chatbots, and social media platforms will generate their own content based on their users’ actions and posts. But where does this lead for someone working within social media?
At the heart of the matter is the issue of trust. As AI-generated images become more commonplace, it will be increasingly difficult to tell the real from the fake. Similarly, chatbots, once seen as a helpful way to engage with customers, will be viewed with suspicion as users question whether they are talking to a real person or a machine.
Social media professionals will have to change
Over the next year, social media platforms will be generating their own content based on users’ behaviour, further eroding trust in the authenticity of what is being posted. As a result, social media managers will have to work harder than ever to ensure the veracity of content posted on these platforms.
I am already talking to large e-commerce and finance companies and their social media professionals about how they will need to refocus and build trust through transparency and authenticity. As we’ve seen with Google’s recent E-E-A-T updates, Google is recognising the need for authenticity and experience (the first E), and ultimately brands and companies will have to follow suit.
The role of social media professionals will be to act as mediators between users and the platforms they use. They must work to maintain a delicate balance between the need for engagement and the need for authenticity, ensuring that users feel confident in the content they are consuming and the interactions they are having.