The Analysis of "Dall-E 2" Digital Image Outcome on "Starry Night, Van Gogh" Prompt


Bayu Setiawan, Pungky Febi Arifianto, Damara Alif Pradipta



The evolution of artificial intelligence (AI) today is massive and seemingly limitless. Dall-E 2, a digital image generator developed by OpenAI, was launched in a limited release on 20 July 2022. It is an interesting subject of attention because its developer claims that the program produces more realistic and aesthetic results than the previous version. The program works by converting (generating) a piece of text, called a "prompt", into a digital image. Images generated by Dall-E 2 are claimed to have unlimited possibilities: the results vary widely, and no two images will be the same. In this study, we conducted a limited analysis of the images generated from specific prompts, focusing on a visual analysis of images produced from "Starry Night" text prompts. Experimentation with 28 prompts indicates that the generated images have fine quality in detail and accuracy. The image outcomes are diverse in color, consistency, composition, object, and style. Overall, Dall-E 2's image output is impressive, even though it has limitations and flaws.
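The prompt-to-image workflow described above can be sketched in code. This is a minimal, hypothetical illustration of assembling a text-to-image request of the kind such generators accept; the field names (`prompt`, `n`, `size`) and the helper function are assumptions for illustration, not the official Dall-E 2 client API.

```python
# Hypothetical sketch of a prompt-to-image request payload.
# Field names and defaults are assumed, not taken from the official API.
def build_image_request(prompt: str, n: int = 1, size: str = "1024x1024") -> dict:
    """Assemble a JSON-serializable payload for a text-to-image call.

    prompt -- the text description the generator converts into an image
    n      -- how many image variations to request (each will differ)
    size   -- requested output resolution, as "WIDTHxHEIGHT"
    """
    if not prompt:
        raise ValueError("prompt must be non-empty")
    return {"prompt": prompt, "n": n, "size": size}

# The prompt studied in this paper, requesting several distinct variations:
payload = build_image_request("Starry Night, Van Gogh", n=4)
```

Requesting `n > 1` reflects the paper's observation that repeated generations from the same prompt produce diverse, non-identical images.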

Keywords: AI generator, Dall-E 2, art, generative art, AI




