AI and the Art of Storytelling: How to Be Smart Users of AI
AI. Two little letters with big implications for humanity. Or is that a tad dramatic? Isn’t Artificial Intelligence (AI) just another technological advancement that we will all eventually accept, like the telephone or the computer? Autocorrect and word-processing editors have done away with the need to memorize spelling, grammar, and syntax rules. Cloud storage has been consuming resources like land, water, and fossil fuels for decades. Social media apps have long been gathering, saving, using, and selling our data. So why do people seem to care so much about generative AI?
Perhaps AI has been a digital tipping point of sorts: the speed at which it showed up and then stayed, the sheer range of its possibilities, the unknown bounds of its capabilities. But we also see AI everywhere around us. It shows up in ads, shows and movies, social media, workplace processes, K-12 schools, and higher education. A simple Google search is now filtered through a Gemini-generated AI Overview, and Microsoft’s Copilot can help you master your squat form.
In this blog post, I encourage a critical middle-ground stance on generative AI (if you missed part one of this series, read that first). This matters for those of us working with communities of storytellers because we often work, and share stories, in digital spaces. We also might come across “stories” that don’t seem to be written by, or to visually portray, humans. While this raises real concerns about what information can be believed in digital spaces, it doesn’t mean the only way forward is to demonize AI. The focus of the AI conversation shouldn’t be on banning all uses of generative AI or on shaming people who use it; it should be on the continual development of digital literacy and critical thinking skills so that we, as humans, can be thoughtful users of AI.
Digital Storytelling and AI
Stories are all over social media. People write personal recommendations for products and services, livestream firsthand accounts of events, and recount their lived experiences through vlogs. Influencers have leaned into authenticity (perhaps with good intentions) as a performative method to gain followers, promote products, and make money. Genuine or not, this performance ends up diluting the power of stories. How can you trust that a story recounted on social media really happened?
The capability of generative AI to produce both written and visual content only intensifies distrust of digital stories as valid sources of information. As you scroll through platforms like LinkedIn, Facebook, or TikTok, you might find yourself questioning whether the content you are reading or watching is real. Maybe you’ve come across posts that are clearly AI-generated, given their telltale visual errors or mismatched written content. If an account you trust posts something, you might be more likely to believe a story is true. But how do you know that it is? If a story isn’t “real,” can it still be true? Can it still be impactful?
An Example. Even before the recent generative AI hype, Feeding America used AI to generate a representative figure of child hunger in the U.S. for its Hunger in America campaign in 2020. The ad has been recirculating in shortened form in 2025, likely because of the popularity of AI. The generated child says, “I am what child hunger looks like in America.” The portrayal signals a female voice and body, fair-skinned and around 12 to 14 years old. In the longer version, the voice changes as different experiences of child hunger are recounted, but the shortened ad now circulating keeps the same voice throughout and does not mention multiple experiences.
[Image: Screenshot of Feeding America’s ad, showing an AI-generated girl looking into the camera. She is wearing a plaid shirt and zip-up sweatshirt and has brown eyes, fair skin, and straight brown hair.]
Is this AI-generated story true? In some sense, yes. It was supposedly generated from data to create a representative figure of child hunger in America. And there might be some benefits to using AI to tell this story. It can protect the personal identities of minors and of those from marginalized communities. It allows for advocacy without shame or loss of dignity. It also seems to suggest a composite: a diversity of identities represented through this one characterization. On the other hand, there are drawbacks. Which data were used to generate this characterization of child hunger in America? Does it give the most accurate depiction? Visually, seeing only one specific depiction of a child tells viewers to focus on one story, contrary to what is being verbally communicated. Intentional or not, when we circulate a stock story to represent a complex issue, we lose important distinctions among those who actually experience child hunger in America.
And so, can AI-generated stories be impactful? Maybe. But if what we as humans value about storytelling is the unique perspectives offered by storytellers about their experiences of living in this world, then we might need more than what AI has to offer; we might need to sharpen our digital literacy skills and consider the ethical complications of engaging with AI-generated content.
Ethical Complications
To be thoughtful users of AI, we must grapple with many ethical complications. I offer three here: data privacy, critical thinking, and authorship.
[Image: Sam Altman, CEO of OpenAI, with an overlaid quote: “People often share personal information with ChatGPT without realizing that chat can be used as evidence in legal cases.” From @businessbulls.in’s Instagram post linked in the text.]
Data Privacy. ChatGPT learns and improves through data input. It also stores data that could be requested in circumstances such as legal cases. Sam Altman, CEO of OpenAI, cautioned that people are sharing high volumes of personal information with ChatGPT without considering privacy policies (see @businessbulls.in’s Instagram post about it). Any digital platform you use also has access to your data, so some of this is not new. But each platform presents its own issues regarding data use and privacy. It is something to be aware of and to make active choices about.
Critical Thinking. One major hesitation about AI, especially in academic circles, is that more use of AI will lead to less use of critical and creative thinking skills. One MIT study, reported on by TIME magazine, supports these concerns. While some in academia have embraced AI, many worry about the long-term loss of critical thinking skills that are developed through the very activities AI seeks to replace, such as brainstorming, researching, reading, writing, and problem-solving. For more resources that critically analyze generative AI, peruse this list.
Authorship. If AI is used during the writing process, should it be cited as a co-author, or at least as a reference? Who owns the rights to a written text? We often have collaborators in the writing process (reviewers, editors, friends), but machines and technologies also “collaborate” with us in brainstorming, writing, creating, and revising. Questions of ownership are at stake, too, because generative AI programs are being trained on content owned by people who have not necessarily consented to their content, whether written, visual, or otherwise, being used in that way. Perhaps our view of ideas like ownership, copyright, and authorship needs to change. Or perhaps we need to find a way forward that is more thoughtful, intentional, and human-centered.
Thoughtful Use of AI
Generative AI is a digital tool to be used by people. By increasing our digital literacy skills, we can develop more thoughtful and careful use of AI. According to ChatGPT, “digital literacy is the ability to find, understand, evaluate, create, and communicate information using digital technologies. It’s more than just knowing how to use a computer or smartphone—it’s about using technology thoughtfully, effectively, and responsibly in everyday life” (OpenAI, 2025).
Critically consider what you put into it. Be wary of sharing personal information or potentially vulnerable experiences about yourself and your loved ones. Some people have reported distressing experiences with AI chats because of their sycophantic nature: the chatbot tells users what they want to hear, shaping its responses around the prompt to affirm and encourage, much like a yes-man (watch a related reel here). Beyond these online safety issues, keep in mind that your data is stored and available for AI training, as well as for potential legal or law-enforcement requests.
Critically consider what you get out of it. When you do use generative AI, actively involve yourself in the process. Open and verify the sources it gives you, think critically about the information it offers (or doesn’t offer), and question signals of algorithmic bias. Seek out options for opting out and other data preferences. It is important for us as users and consumers to exercise our rights. If we simply accept what technology companies give us, we may find it harder and harder in the future to protect ourselves, to secure our own data, and to choose what we want others to have access to.
We can increase our digital literacy skills through education, conversation, practice, and critical engagement. Use ChatGPT to learn about ChatGPT: ask it to summarize its own privacy policy, then follow up with clarifying questions (a short code sketch of this exercise follows below for the curious). Be open-minded, and also be critically aware.
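If you are comfortable with a little code, the same exercise can be run against the API instead of the chat window. Below is a minimal sketch using OpenAI’s official Python library; the model name and the wording of the questions are my assumptions, and the regular chat interface works just as well.

```python
# pip install openai  (official OpenAI Python SDK, v1.x)
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

question = (
    "In plain language, summarize OpenAI's privacy policy. What do you "
    "store from our chats, and how can I opt out of training on my data?"
)

# First turn: ask the model about its own privacy policy.
first = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is an assumption; any current chat model works
    messages=[{"role": "user", "content": question}],
)
answer = first.choices[0].message.content
print(answer)

# Second turn: follow up with a clarifying question, passing the earlier
# exchange back so the model has the conversation as context.
followup = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Could those stored chats be handed over in a legal case?"},
    ],
)
print(followup.choices[0].message.content)
```

Whichever route you take, verify the model’s answers against the actual published policy; a chatbot’s summary of its own terms can be wrong.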
If you’d like to learn more about our resources on ethics and storytelling, follow us on Instagram.
Dr. Danielle Koepke is a content creator and consultant at Confianza Collective. She is also a teacher and researcher with expertise in digital storytelling, community health, and information literacy. To read more about Danielle’s work, check out her website.