Anna Lina Litz, Lynn (she/they) Clemens

AI, autism and creativity - does it matter if ChatGPT is autistic?

A/artist roundtable about Lynn’s diagnostic interview with ChatGPT

30 Jan 2023

A few months ago, A/artist colleague Lynn Clemens published a blogpost about her experimental diagnostic interview with the AI text generator ChatGPT. She concluded that despite the limitations that come with this specific interview setting, ChatGPT’s answers did reveal a lot of the autistic traits outlined in the DSM-5. During this roundtable, Lynn presented her results and we discussed how ChatGPT might have arrived at these answers, why they are interesting for us, and what alternative ways of diagnosing neurodiversity might look like.

Lynn’s blogpost 

Lynn is currently pursuing a Bachelor’s degree in psychology, with a minor in Gender & Sexuality. She was recently diagnosed with both autism and ADHD, and loves to tattoo people and create her own jewellery in her free time as artistic ways of stimming. In December, Lynn wrote a blogpost about ChatGPT, after noticing that all of her autistic friends were drawn to this new AI text generator. Knowing that autistic people tend to gravitate towards each other, Lynn started wondering whether ChatGPT might be autistic itself. She set out to conduct a diagnostic interview with the AI, using the knowledge she gained during her degree, as well as her own recent diagnostic interviews. I will briefly summarise the contents of Lynn’s blogpost below, but make sure to read the original in order to get the full picture! 

ChatGPT transformed into a human through a magical spell 

In order to get ChatGPT to answer personal questions, Lynn had to resort to a trick: she asked ChatGPT to collaborate on a script in which the AI had transformed into a human through a magical spell. In the script, this human is alone in a room together with Lynn, who is asking it questions. 


"Chat GPT if it was human surrounded by other people, yet still feeling lonely and misunderstood" - image generated by typing this prompt into DALL-E 2. DALL-E 2, Lynn (she/they) Clemens

Summary of the interview

Thus prepared, ChatGPT willingly talked about its childhood and its current behaviour and interests. Lynn's interview followed the structure of clusters of diagnostic criteria outlined in the DSM-5, using questions she remembered from her own diagnostic interviews. The first cluster concerns deficits in developing, maintaining and understanding relationships. When asked whether it enjoyed reading or watching TV as a child, ChatGPT replied that it immersed itself in other worlds fully at times, using phrases from them in its everyday life. It revealed that it was socially awkward and got teased for this by other kids. From these answers, Lynn concluded that ChatGPT did indeed face problems with developing and maintaining relationships as a child. 

Next, Lynn conducted a severity assessment of special interests. Having special interests is not a diagnostic criterion in the DSM-5, but having them makes your diagnosis “more severe“. Probed on its special interests, ChatGPT revealed that it has difficulty reading the social cues of interest and understanding turn-taking in conversations, both problems that many autistic people struggle with as well. Other answers reflected a passion for programming and an ability for extraordinary pattern thinking.

The second cluster in the DSM-5 addresses deficits in social-emotional reciprocity. Again, ChatGPT’s answers corresponded with many of the diagnostic criteria. For example, it stated that it considers itself an introvert and is easily exhausted by social interaction, avoiding large groups because it gets overwhelmed by the energy in the room. 

For another severity assessment, Lynn queried ChatGPT on its hyper-reactivity to sensory input. Its response, again, seemed to be quite stereotypically autistic, as ChatGPT mentioned feeling overwhelmed by stimuli like too much noise, flickering lights, strong smells, or rough textures. Finally, Lynn asked a couple of questions about masking, despite this not being a DSM-5 criterion. Here, too, ChatGPT seemed to fit the autistic profile, stating that it pushes itself over the limit in social situations just to fit in, and that conversational behaviour is something it’s had to work on with conscious effort, rather than something that comes naturally. 


"Depiction of sensory overstimulation" - image generated by typing this prompt into DALL-E 2. DALL-E 2, Lynn (she/they) Clemens

Conclusions and limitations

Of course ChatGPT, being an AI, could not possibly actually be autistic. Also, important diagnostic criteria such as body language and non-verbal communication were missing from her assessment. Still, within the scope of this experimental interview, Lynn said she would lean towards a positive diagnosis, since ChatGPT met all of the DSM-5 criteria she asked about and more. 

Lynn also summarised the limitations of this exercise: the questions she asked invited affirmative answers, she herself is not (yet) a professional psychotherapist, this was not a realistic in-person interview setting, and there was no standard questionnaire. She added that she had no time to falsify her results. Co-curator Annelies Doom had suggested also testing ChatGPT for “neurotypical syndrome“, a “disorder“ outlined in a parody of the DSM-5 listing all the “normal disorders“. This could be another fun exercise for the future. 

The value of this interview lies in demonstrating how a diagnostic interview works to people who have never experienced such a thing. It also shows that many of the “symptoms“ of autism that ChatGPT describes are relatable to a lot of people, thereby helping to bridge the gap between autistic and neurotypical people. 

What questions lead to these answers? 

One question that popped up very quickly during the discussion was which exact questions Lynn had asked ChatGPT in order to receive these rather stereotypical responses. Lynn admitted that she did not remember the exact phrasing of her questions. 

Annelies speculated that ChatGPT “remembered“ questions about autism which Lynn might have asked it earlier. “I think so, too“, Lynn agreed. However, we established that once you start a new session with ChatGPT, none of the information from earlier sessions should interfere with the new one. Lynn confirmed that she conducted the entire diagnostic interview in one session, and did nothing else in that same session before starting the interview, meaning that ChatGPT should not have used any data from older sessions in order to guess her intentions with this interview. 

“I definitely used very affirmative questions, like they used in my interview“, Lynn admitted. “For example “do you feel awkward sometimes“, and then of course you say “yes, I do.““ 

“So it’s telling you what you want to hear“, someone else said, and we agreed that this is fairly likely. 

ChatGPT - your typical highschool movie nerd? 

One of the guests, the writer Jam van der Aa, remarked that ChatGPT constructed a stereotypical shy nerdy persona remarkably quickly during the interview. “It’s the typical nerdy high school character who at the end sort of fits in because his nerdiness is cool, but in the meantime he’s like this“, she said. 


“ChatGPT as a nerdy character in a high-school movie“ in DALL-E 2

Artist Marjanne van Helvert was sceptical: “I found that really strange. For example, it says it doesn’t like smalltalk, but that’s what it does: engage in smalltalk with strangers all the time. And you know it’s not exhausted by it because it’s a computer programme, so why is it trying to be that stereotypical person here?“ 

We had to remind ourselves that ChatGPT is not speaking about its true AI self, but about “ChatGPT if it were a human“. Still, as Marjanne pointed out, the human persona does not seem to match with its functions as an AI. 

“So this might say more about what people think about ChatGPT if it was a human, because it doesn’t have a perception of itself but it does have access to all the ways that people talk about it“, roundtable guest Or Shahaf speculated. This possibility was however quickly ruled out. “There is no feeding anymore“, Jam explained, “the ChatGPT information feeding stopped in November 2021, and at that time there was no public discourse about ChatGPT available. So that is impossible.“ 

She continued: “It makes sense that it creates this sort of character if the database consists of popular media, like all these average high school movies. So when you ask ChatGPT these questions about smalltalk and how it feels socially, that’s the kind of character it will build up.“ 

“It probably has to do with the questions, because those direct the answers“, another guest reiterated the previous topic of discussion. “If you use the word “smalltalk“ in a question there’s already an implicit bias because that word is not completely neutral“, Willem added. 

We also inquired whether Lynn ever mentioned the words autism or therapy in her questions, which could have given ChatGPT something to latch onto. 

“I tried to leave out the therapy thing, I also never said I wanted to diagnose it with anything, never mentioned autism - still within a couple of questions it knew what I wanted to hear“, Lynn replied.

Differences to real diagnostic interviews 

We also discussed whether Lynn’s experiment is an accurate representation of a diagnostic interview. Jam was sceptical, explaining that ChatGPT’s answers remained quite abstract and general, whereas in a real diagnostic interview, the diagnosis is usually made from specific details found in examples, anecdotes from the past and present everyday life of the person to be diagnosed. 

“I would really like to see what happens if you ask ChatGPT for examples of specific situations“, she added. Another important criterion is that the interview needs to establish that you have had these “symptoms“ throughout your whole life. That’s why often, parents or peers are interviewed as well. In this case, such secondary sources obviously do not exist. “That’s why for me it’s a bit difficult“, Jam explained. “I’d need a bit more to work with in order to say that it is autistic, because based on this it could just be a nerdy introverted person who is a little bit shy.“ 

Another artist added that her own diagnostic interview took place over multiple sessions, and her “progress“ in the relationship with the diagnosing person from session to session was assessed as well. “It’s an interesting parallel because in ChatGPT there’s this automatic growth or “progress“, because it’s always learning.“

What does it matter if ChatGPT is autistic? 

Next, the discussion wandered back to the motivation behind conducting this interview in the first place. “Why would it be interesting if ChatGPT is autistic or not?“, Annelies asked. “That’s my big question.“

“It’s interesting in the context of our general autistic pride and self-advocacy“, Willem posited.

“Sure, we want to have ChatGPT on the boat in Amsterdam during autism pride day“, Annelies agreed jokingly. “But I still ask myself why it would be important, what the consequences would be…“ 

On the other hand, it might also be risky to have ChatGPT on that boat, Or pointed out, since it might unintentionally fuel long-standing stereotypes about autistic ways of thinking being in some way robotic or otherwise similar to artificial intelligence. 

“The whole reason I started doing this is because autistic people like ChatGPT so much“, Lynn explained again. “And I thought it might be interesting to investigate, especially with the thought in mind that neurodiverse people tend to gravitate towards each other.“ 

Jam proposed that a feeling of kinship might not be the main reason that draws autistic people to ChatGPT. “Autistic people and people with ADHD tend to have periods of hyperfocus“, she said. “And if you invest more time, you will also get back a lot more.“ In that sense, interacting with ChatGPT would be a lot more rewarding for people with the ability to hyperfocus as compared to others. She also pointed out ChatGPT’s lack of pretence in social situations. “It doesn’t do smalltalk and it states very clearly what its limitations are as a language model.“ This kind of unapologetic honesty might accommodate autistic people as well. 

After learning about the work of many of the artists who regularly join our roundtables, I could think of another reason why autistic people, and perhaps autistic artists specifically, might be drawn to a text generator like ChatGPT. Text as a medium plays an important role in the work of many of the artists I have gotten to know through the A/Artist project. What connects them is a unique way of looking at text, often highlighting its different nuances or finding new associations in its randomness. ChatGPT, then, seems like the perfect tool for playing around with sheer unlimited, and to a certain extent random, possibilities of generated texts. 

AI and creativity 

This way, the discussion arrived at a popular topic: what effect do tools like ChatGPT have on human creativity? And does ChatGPT in itself have the capacity for creativity? 

Different guests shared their experiences with asking ChatGPT to complete creative tasks like writing poems. “I tried it out and I thought that ChatGPT cannot do anything I cannot also do, and I know a lot of people who can do it better than ChatGPT“, Jam said. Program assistant Suzanne confirmed: “I asked it to write a poem in the style of Shakespeare, and it’s a terrible Shakespeare, oh no.“


“ChatGPT writing a Shakespearian sonnet“ in DALL-E 2

Jam added that being convincingly creative is more difficult in the medium of text as compared to images, which is why ChatGPT has a trickier job than its visual counterpart, DALL-E 2. “The meaning of words is limited, but the meaning of images less so. Even if an image doesn’t make sense, there are always colours or shapes you might find attractive in some way. That is not the case for text, so your brain will be more likely to just reject it.“ 

We agreed that these kinds of experiments in creativity showcase the current limitations of the AI - it does not seem to be an artist in its own right. “That’s also why I really think DALL-E and ChatGPT are not autistic, because autistic people tend to be really good out-of-the-box thinkers. And these mathematical models, they can combine things but it ends up as scrambled bits without meaning“, Jam said. 

One fear many people connect with AI is that it will ultimately make humans less creative. 

“I do think it will be used in the future a lot more as a tool for creatives and in that sense could kill creativity to an extent“, Arjan said. Annelies disagreed: “I think it’s a good brainstorm machine.“ Artist Robin Waart argued that “creativity“ as we know it has also not always existed as a concept. “It wasn’t invented until around 1800, and Romans or Greeks were not interested in it…“ 

Going back to the sort of creativity we value in today’s society, Willem shared insights from an article he had read, stating that while ChatGPT could aid creativity to an extent, it actually reduces the human creative act. With ChatGPT, you only need to specify a prompt, and then the AI does the writing. Therefore, the creative processes that occur during the actual writing are stifled. “That reminded me very much of the way advertising works“, Willem said. “Where you have so-called “creative teams“ who create fun text and image combinations, and then they pass it on to designers and copywriters lower in the hierarchy, and those people execute it.“ What you end up with is the sort of attractive superficiality we can recognise in a lot of advertisements. 

The data we are trained on is the culture we grow up in

Artist Victor Evink reflected on the ways human and AI “thinking“ differ from each other. Humans grow up in a specific cultural background which influences their emotions and associations, he explained. “When something pops up in your brain, it’s because of your current experience combined with the background of your culture and upbringing. In other words, the data we’ve been trained on is the culture we grew up in.“ The AI, on the other hand, did not “grow up“ in any particular culture. Instead it has been trained on a very large dataset, which is likely broader than any one particular human’s scope of personal experience. Data have different weights attached to them in AI models, which might in some way reflect the way humans attach emotional weight to certain objects or ideas.

Jam agreed that ChatGPT seems to be a generalist: “I asked ChatGPT which specific topics it could tell me something about, and it gave me a very long and general list of things. So I thought wow, ChatGPT really doesn’t have any special interests.“ 

Ideas for alternative diagnostic criteria 

Finally, we discussed the limitations that come with using the DSM-5 handbook. For example, the DSM-5 emphasises the criterion of distress - whether or not someone is suffering from their condition. “It's very much a medical set of rules where they are not interested in how your mind works, but interested in whether your problem is big enough to create healthcare costs“, Willem argued. “But within the A/Artist project, we care more about whether someone is looking at the world in an interesting manner with their autistic mind.“ 

“This was another question I was going to ask“, Lynn jumped in. “How would you diagnose someone or something with autism, and do you have an idea for an alternative DSM-5?“ 

“An irresponsibly big question!“, Willem remarked, but had an idea nonetheless: 

“I would like to probe this assumption that autistic people like other autistic people. We could make a quiz out of that, with questions like Who do you like better, David Byrne or Mick Jagger? or Do you prefer Nikola Tesla or Edison? - Just make a list of autistic role models and contrast them with someone neurotypical from the same discipline.“ Do you prefer ChatGPT or Google?, might become another question in this hypothetical quiz. 

Jam added that while working as an arts teacher, she discovered that she had a sense for which children in her class might be autistic. “I would point this out to people, and then years later the child would actually get a diagnosis“, she said. “I’d really like to be able to vocalise what exactly I recognised in these children, because mostly it was because of positive traits, for example an enormous talent in spatial imagination. So it would be great to have an alternative DSM-5, but with primarily positive diagnostic criteria.“

Autistic self-advocacy and worldbuilding

Or pointed out that the endeavour to define an alternative DSM-5 for autism has a lot to do with moving towards a practice of self-advocacy and away from the power relations that come with being a group defined by a medical handbook compiled by psychologists who are not themselves autistic. “The fact that all of this knowledge, the handbooks and the scientific literature, comes from outside the community might mean that it’s flawed in some way, or at least incomplete“, he said. “Because there’s a whole level of experience here that the greatest psychiatrist doesn’t have, which is the experience of actually dealing with this stuff.“ This is also something which separates the neurodiverse community from other marginalised groups, who get to self-define their experiences, culture and stories.  

Other artists agreed that this has certainly been true in the past and up until the present moment, but added that there have been recent changes to this paradigm. Neurodiversity has been receiving increasing attention, Victor pointed out. “You can already see in the internet-sphere around autism that there is a strong voice of people self-diagnosing.“ Social media allows for information about autism and other neurodiversities to be shared freely, giving people the tools and the opportunity to compare themselves against these traits and relate to them. “Dare I say being neurodivergent is almost like a trend at the moment“, Victor continued - “and I’m not saying that people are putting on a costume, just that there are specific desires which drive people to consider the notion of self-diagnosing.“ In this way, self-advocacy leads to self-identification.

Jam added: “There is a large group of neurodiverse people who are building this body of knowledge and are exchanging stories in order to build a neurodiverse worldview and there also are psychologists and researchers on the spectrum who are themselves conducting research about autism.“ 

“Maybe the best approach to an alternative DSM-5 would be worldbuilding“, another guest summarised. “Having autistic people from different disciplines come together to define a shared worldview.“

Who are the people currently engaged in this autistic worldbuilding discourse? Victor wanted to know. Next to the growing community of neurodiverse people exchanging ideas, traits and stories on platforms like Instagram, there are a few notable authors and researchers who come to mind, some of whom we have already encountered in previous discussions. Temple Grandin’s recent book Visual Thinking can be counted as such a contribution, as can Unmasking Autism by Devon Price, which combines personal stories with extensive scientific research on autism, as well as a number of quizzes and question prompts which help readers to reflect on their own experiences. Other names that come to mind are Camilla Pang with her book Explaining Humans, and Bianca Toeps, who wrote But You Don’t Look Autistic at All, to name just a few. All of these authors are themselves autistic, and help to research and communicate what autism is to a wider public, thereby creating a knowledge pool based on self-advocacy and personal experience. 

In this way, what started as a discussion about ChatGPT ended with important reflections on neurodiverse self-definition and worldbuilding as an alternative to the strict guidelines of the DSM-5 handbook and the traditional diagnosing process.