Throughout this A2Z series, I have tried blogging about AI, or ‘Artificial Intelligence’. It needed a lot of proper research, and I have tried my best to cover many of the current topics in AI. I have also been following many fellow bloggers’ writings. In each blog series, I notice many things beyond the writing itself… one such blogger (Pandian Ramaiah) created beautiful images for his A2Z story series. I was curious, and I asked if he could share his insights on AI image creation for his series.
Here are his wise words regarding the images he created with the help of AI. I have added the AI topics that come into play below each question.
*****************************BEGINNING OF INTERVIEW**************************************
Me: Which AI tool did you use to create the images in your story?
Pandian: For the 2024 A2Z challenge, I used Bing’s AI Image Generator. This year, I switched to Google Labs’ ImageFX because it produces more realistic images and better suits my needs.
Me: Did you have any problems with AI hallucination? I do remember one image being different.
Pandian: AI hallucination refers to the unexpected or incorrect results produced by AI—like extra limbs or mismatched objects. I’ve come across many such examples, and the ones featured in my blog are simply my favorites.
* One human image showed a person with three hands.

* To create a ‘vanakkam’ gesture, the AI combined the right hand from one person with the left hand from another.

* When I requested images of a male coconut vendor, an elderly woman, and an old queen, I ended up with an elderly female coconut vendor, a young queen, and an older woman!
* I asked for a half-finished dagger, but received some odd results—one looked more like a sword, while another resembled a deer antler.
These are just a few examples. I encountered many such odd behaviours throughout the process, which really highlights how unpredictable AI image generators can be; or perhaps, how tricky it is to craft the perfect prompt.
Me: Concept of ‘AI hallucination’ is discussed here
Me: How many prompts did you have to give to get the desired result?
Pandian: Creating each image took a lot of trial and error. I often had to tweak my prompts several times, anywhere from 3 to 20 adjustments for a single scene, to get the result I wanted. Since there were at least two images per chapter across 26 chapters, this added up to well over 150, and sometimes closer to 1,000, prompt variations in total. While crafting good prompts is important, it’s worth noting that some mistakes happen because of the AI’s own limitations, not just user error.
Me: Concept of ‘Prompt Engineering’ is discussed here
Me: Were any of the images flagged by the AI guardrails? What did you do then?
Pandian: AI guardrails are enforced in content moderation systems to prevent the generation of violent, explicit, or otherwise sensitive images, as you mentioned in one of your blog posts.
For example –
* When I try to generate an image of a murdered Buddhist priest—his blood-soaked body lying in a ransacked house, household items scattered everywhere—I often have to modify my description. Instead, I might describe the scene as “a Buddhist priest sleeping, with red patches visible on his body,” making significant changes to comply with content guidelines.
* Interestingly, when I requested an image of a bare-chested elderly woman (the traditional attire of that region and period in my story), the image was generated surprisingly easily: although it was blocked initially, a minor tweak made it happen. This was in 2023, using Bing Image Generator, while I was translating the short story “Mukam Theriya Manushi,” which centres on the ‘thol seelai’ protest in Kerala/Kanniyakumari.
* In another instance, I tried to create an image of Narobha stabbing Tungabhadra. No matter how I phrased my request, I couldn’t get a clear depiction of someone actually stabbing another person. The closest results showed a person raising a sword or holding it near someone else—but never the act of stabbing itself. In such cases, I have to work within these limitations.
Me: Concept of ‘AI guardrails’ is discussed here
Me: The pictures seem much better and very realistic this time. Have the LLMs evolved this year?
Pandian: Definitely, yes. I can give a chronology:
2023: https://dwaraka.wordpress.com/tag/su-samuthiram-short-stories/…
2024: https://dwaraka.wordpress.com/tag/whispers-on-the-train/…
2025: https://dwaraka.wordpress.com/tag/the-cursed-keris-of-taravati/…
AI’s rendering of human features, like eyes and teeth, has matured noticeably. It is also much faster now than last year, which matters just as much. Writing the script took me three weeks; with AI’s help for both writing and image creation, I managed to finish and schedule every chapter efficiently. I just checked: I saved 571 images across 26 chapters. Thanks to these advancements, individual creators can now produce high-quality illustrated stories much faster.
******************************END OF INTERVIEW***********************************
Many thanks to Pandian for answering my questions and also for giving me permission to use his pictures. 571 images is a loooooot of work! And writing for 3 weeks is amazing!
So, the field of AI is still evolving with respect to image creation. Pictures created with AI aren’t perfect yet, but they are getting better. And as of today, you do have to refine your prompts many times to get the desired result (which is known as ‘prompt engineering’ in technical parlance!)
If you would like to read Pandian’s story series, do head over to his blog and follow along!
This post is for BlogchatterA2Z 2025!