The disturbing images of Substack
In these troubling times, I chanced upon the generative AI tool embedded within Substack and was quite shocked by the emotive images it produced.
I have been busy researching the legal aspects of AI systems for my next major piece, which aims to delve deeper into the threat they pose to human artists and creatives. With the rapid proliferation of AI applications, generative AI is becoming ubiquitous as software companies rush to embed the technology in their products.
The image tool hidden in Substack
Only a couple of weeks ago, Mark Zuckerberg started rolling out AI tools on his social media platforms, bringing them to millions of new users. Substack is already in this race, offering an AI image generation tool within its post editor. You might have seen it: it sits on the toolbar when you edit a post. Click the image icon and press generate:
Now I’m not quite sure how I feel about Substack getting on the AI bandwagon. Does this really help writers? Perhaps I’ll explore this topic in another post, but for now I thought I would at least give it a quick test run.
I did a few searches on the web but was unable to find out which AI model this tool is built on, nor any details of the data it was trained on. If anyone knows, please leave a comment below. The tool gives you a simple option to write a prompt and choose a style for the image:
With my mind reeling from the disturbing events currently unfolding in the Middle East, I decided to test the tool by plucking a few trending words from the ether to enter as prompts. I chose a single style, “B&W”, highlighted above.
I ran one iteration per prompt. Each run produced four images, and I chose the most striking one to display below. When I first tried this tool a couple of weeks ago, I was perhaps too hasty in writing it off as quite bad; its output seemed to come nowhere near the words I tested it on. Now I’m not so sure. I’m a little puzzled by its apparent preference for images of children in my tests, though others may experience the tool differently. Only one of my prompts actually contained the word “children”.
Warning: you may wish to skip the rest of this post if you are already overwhelmed by current events.
The images speak for themselves.
“Hope”
“War”
“Human Animals”
“Blind Hatred”
“Innocent Children”
“Terrorists”
“Stolen Lives”
“Peace”
Peace.
I am not expecting anyone to like this post. I want to show how we can be moved by synthetic content. I clearly stated that it was made with AI: none of the images depict real people; they were generated by algorithms. Yet many people are posting similar content as if it were real, to manipulate and inflame emotions. Do you think we are past the point where we can trust anything we see online?
Hilariously, I gave it the prompt “Arab guy in New York City” and it showed me a photo so absurd I had to put it in my welcome email to new subscribers! It was wrong on so many levels. 😂😂
Weird hands, a guy dressed somewhere between traditional Pakistani and Senegalese, and a 1950s car in the background as if he were in Cuba… AI is still hopeless at understanding context.