The trouble with AI
Apr. 13th, 2025 07:26 am

AAARGH. I just wanted chatgpt's help to structure a text. You know - what should be in the introduction, how long should each part be for easy reading, and so on. Unsurprisingly, I'm shit at this stuff, but usually, the AI is of great help - at least when it comes to nonfiction with clear structural requirements. (Letting the AI write texts is, of course, hopeless, so I won't even try. Letting the AI organize text structures before I just write stream-of-consciousness stuff, however? I mean, that could save me some headaches.) Trying to let it organize fiction, however? Wow. WOW. Today, I learned that chatgpt is really Very Fucking American.
Things I learned:
- The AI will not just try to reorganize the plot around an acceptable novella structure (which, after all, is what I asked it to do) but flag any character behavior for editing that does not conform to American cultural standards.
- The AI told me that my characters are too obsessed with honor and duty and I should consider editing that. I'm like... WAIT... I'm actually writing a Fantasy!Medieval!North!Germany setting. With Fantasy!Medieval!North!German characters with the corresponding cultural background and mindset. (Come on. It's fucking Germany. At least some of the characters take their oaths seriously...) Apparently, Germany written by a German is not acceptable by genre standards...
- The AI completely unasked (!) changed a scene description from a male character making tea for the group to a female character making the tea. Thanks for the casual sexism, I guess.
- The AI described a female character as "flirtatious". She's... not. She is, however, speaking to male characters. In, you know, plot-related ways. Apparently, that's yet another thing the AI can't handle. (Not a problem with the technology itself, I know, but definitely with the training dataset. WTF.)
- The AI completely unasked (!) tried to give a genderfluid character an issuefic subplot centered around Gender!Angst!American!Style. I mean, I obviously don't expect an American piece of software to understand historical German ways of gender expression... which is why I didn't ask it to. This character has a perfectly acceptable subplot centered around military technology and espionage, and no gender issues whatsoever, thanks.
- The AI really wants to change the magic system (which is, of course, North German as fuck, considering the setting) to something ripped off Tolkien.
- The AI is shit at interpreting character motivations in ways that are actually pretty hilarious.
Thanks for the non-help. -_-
no subject
Date: 2025-04-13 03:34 pm (UTC)

Actually, I'd like to wait a bit with that judgment - because this is something that has been said about pretty much any new technology in the creative fields, including but not limited to nasty stuff like the printing press (!) that destroyed all the valuable skills that come from copying manuscripts by hand, which - of course - totally undermines memory and scholarship. (See, for example, Johannes Trithemius, De Laude Scriptorum Manualium, 1492) And, as far as I can tell, in the long run, neither the invention of the printing press nor later inventions have made humanity in general more stupid. So, as far as these claims go... I'm treating them with a healthy dose of skepticism, though I won't deny that new technology will likely change the way people think and reason. (The printing press definitely changed the very concept of academia.)
"I've yet to see a single use case in the creative fields."
Just an example, here... I've seen a lot of it in fashion design, actually - when it comes to questions like "how will this color look in different lighting", it saves the designer the (suuuper shitty!) rote work of sewing the same dress in twenty near-identical colors and carrying those piles of dresses around to test them in different situations. Saving that step makes things infinitely more efficient - without hurting the actual creative and artistic work of the designer. I don't think it makes the designer a worse artist. (Also, they still need to learn to sew, and will spend a lot of time doing so, obviously. It's just that they get to sew twenty different projects instead of the same thing twenty times. Arguably, the variety actually increases their learning and will improve their skills, not diminish them...)
While we're talking about environmental resources: I believe the cost of AI use is still lower than, uh, twenty dresses that no one needs... These fabrics and dyes don't just appear from out of nowhere, either.
And I'm sure there are other examples that I have just not seen - probably also because things that go well (like streamlined design processes) don't generate nearly as much public attention as things that go wrong (like plagiarized school essays and uncanny awful "artwork").
"I don't think structure in writing is a rote task"
I think it depends on the situation and on what you are trying to do. At the high, artistic level, I agree - it is not a rote task at all.
"A better comparison, since you're also an artist, is "why should I learn how to do composition when the fun part is applying colour and details to a piece?" The answer is that structure, concept, and prose are inherently intertwined—form follows function."
I see what you mean! For someone who wants to be an artist, there's no shortcut around these things. However... Not everyone wants to be an artist - which is why there's also a huge market for coloring books, painting-by-numbers, and other stuff like that. And... I don't buy it, but I have no problem with the existence of painting-by-numbers, either. If a person's goal is not "creating art", but simply to spend a relaxed evening having fun with colors, I think that's a perfectly legitimate purpose. It won't make that person an artist, but, as far as I'm concerned... whatever.
"but to learn how to do it so that you can take the kind of creative rule-breaking that makes writing actually interesting to read"
Absolutely crucial when you want to be a writer. Again, I totally agree! However, if the task is not "produce a wonderful novel that everyone will enjoy and feel deeply moved by", but simply "take this dump of assorted, chaotic research notes and sort them into something that others can read and understand, so I won't have to waste my valuable research time doing this annoying shit when I'm behind schedule already", or "read these 40000 badly written, redundant pages and summarize the author's key topics in five sentences so I can see whether I actually need to read their work or not, and I won't waste weeks just to find out it has nothing to do with my research topic", or "translate this paper for me so I won't have to learn Russian* just to find out what this author is saying"... I think that's where generative AI could really shine. If it were trained and used properly, anyway.
*By the way, I don't want to diminish the value of learning Russian (or other languages), either. I'm just saying it's not efficient to learn a completely new language every time you simply need fast access to a foreign publication. And that someone who tries that approach will maybe become a better scholar in the process (actually... sure, they will), but also, never get anything done within a reasonable timeframe.
long-winded agreement
Date: 2025-04-13 04:59 pm (UTC)

Why stop there, Eller. :) We could implicate the invention of writing (any writing system) as making people stupider. I was talking with Marie Brennan (an anthropologist) about writing techniques that descend from oral tradition as (probably) mnemonic aids (parallel structure in poetry, rhyme/meter, alliteration, assonance, kennings, whatever), memory palace techniques/method of loci.
As someone who peer-tutored Ivy League students writing academic essays during uni from 1998-2001, I have to say that stupidity/ineptness/inexperience at structuring even comparatively simple academic essays, let alone novels, cannot be localized to the advent of AI. The ways in which people struggle with this may be more exposed or differently exposed but again, teaching this as a cognitive skill is a surprisingly sticky problem. :]
For that matter, we could implicate written music notation vs. musicianship. Most serious classical (Western) musicians do have pretty serious ear training but it's also possible to be some kind of musician who's dependent on music notation rather than being able to play things by ear.
Re: long-winded agreement
Date: 2025-04-13 05:17 pm (UTC)

Well, yes. And, in some ways, it does. I think Walter Ong started the academic debate about that topic... but also, having some illiterate family members, I can even personally confirm that, for example, the memory skills and spatial awareness skills of people who learned how to read are consistently much worse. Does that mean we should abort the idea of the written word? Uh. XDD
Re: long-winded agreement
Date: 2025-04-13 05:28 pm (UTC)

no subject
Date: 2025-04-13 05:05 pm (UTC)

There are actually valid points about the printing press—but more importantly, about new technologies that have been around longer than ChatGPT. Algorithmic social media has also made us think less well. The academic fraud that is Joseph Campbell and its evolution into the Pixar formula has made film and to a lesser extent commercial genre fiction less interesting. Etc. I know enough about how the technology works to confidently assert that it will not improve the arts or teach anyone how to be a better writer.
I don't understand, fundamentally, the desire to shortcut all the fun stuff in creative fields. To take a field I suck at, I would not see the point in using ChatGPT to make music. I have no musical talent myself, but all the joy of making music would theoretically be figuring out the making of music, not just getting a machine to spit out something that vaguely sounds like the thing in my head. Even as a hobbyist, it's just a fundamentally different thing that skates by the point of the thing itself.
no subject
Date: 2025-04-13 05:18 pm (UTC)

If the bar is "it will not...teach anyone how to be a better writer," there are a kazillion hand-created-by-actual-human-writers tools/books/whatever that fail that bar too; so do we then get rid of those because they are a pox upon the house of human creativity?
(Ironically, I don't use ChatGPT because I got bored decades ago after two days with Eliza and extensive reading on Minsky, Schank, et al. I don't have an ethical problem with "vegan" or legally licensed AI tools.)
The homogenization of commercial narrative is a multi-pronged mess so I'm not going to touch that as there is not enough space in the margin.
no subject
Date: 2025-04-13 05:34 pm (UTC)

In terms of who I am, eh, I'm someone who can now directly trace how my work was stolen without compensation or consent to make someone rich. So I do have a moral objection as well as an aesthetic one. People can have fun making TTRPG characters with picrew.me if they aren't interested in learning to draw.
And I've seen what students do with these tools. Even when it was "only" Grammarly, the willingness to cede authorial control to software made their work superficially more polished but resulted in sloppier writing and thinking.
This is just in art and writing. My partner teaches science, and recently had to contend with students insisting that HPV isn't a virus, because the AI told them that it wasn't. Even when he explained what it was and how it worked, they refused to believe him. This makes them less likely to get a lifesaving vaccine and more likely to die because they can't differentiate between machine hallucination and actual information. It's not just me and my ego and judgment, it's about how we learn—or don't—to think at a critical and structural level.
no subject
Date: 2025-04-13 05:47 pm (UTC)

We have people believing things based on shitty snake oil advertisements - if you look at the history of medical advertising, you have people buying magnetized snake-oil treatments for XYZ long before modern computers are around. Chad Orzel, who's a physics professor at Union College, talks about how he was bemused by the sudden hand-wringing [edit: fixed Freudian typo lol] from humanities colleagues around essay-writing cheating, because it's so much easier to cheat in a typical math/science exam - this is a Very Old (Sometimes Boring) Problem. People having to distinguish shitty information from good information in general is an Old Hard Problem. The prevalence of machine hallucination exposes that problem in deeply troubling ways, but it's not a new problem. I mean, Herodotus ffs.

If we're looking at compensation schemes vs. IP theft, sure we can look at how tons of works (I think something like ~80 of my works turned up in that Atlantic database of stolen written works) are stolen without permission and used for profit; but this then ties into how the entire compensation system for creative narrative work has been in hell mode for a long time. No one ever adequately squared the circle regarding DRM vs. ebook pricing vs. ebook piracy. If we're at generalized compensation for narrative/creative work, capitalism has a ton of specific problems in this space, but also at the point where the Nibelungenlied has a whole fucking shout-out to PLEASE PAY UR LOCAL MINSTREL KTHX, the general problem of compensation predates capitalism by centuries.
no subject
Date: 2025-04-13 06:04 pm (UTC)

But I do believe in affordances, and there are certain tendencies that the technology does encourage, in the same way that affordances in, say, algorithmic social media will lend themselves to bad political thinking over good political thinking. And that's where the latter assertion, that AI causes sloppy thinking rather than allowing the people who would be sloppy thinkers in any event to get away with it, is also something that I believe to be true.
I will say there's substantially more wiggle room in the latter argument. I've been researching the moral panic around cellphones and social media (curiously, in education, this is considered a much larger problem than ChatGPT), and I think the kids who are addicted to social media and phones would probably, in earlier ages, have done other things to avoid learning. But having access to social media and phones is also more distracting to me, an adult who didn't own a phone until I was 30. So while I do think there's a moral panic, I also do think that designing apps that work like slot machines to exploit loopholes in human cognition probably results in behaviours that wouldn't otherwise happen.
Likewise, I have a rough idea of the curve of students who are good at writing and interested in learning versus the ones for whom it's pure hell and who will look for any reason to avoid it. If it were purely a matter of "AI makes it easier for people to cheat and be stupid," you'd think that the latter group would be the ones doing it the most. But it's actually the middle to high achievers, who normally might struggle through a difficult task, who are giving up and resorting to ChatGPT. That can't be divorced from other material conditions—namely, grade inflation and economic instability—but I am seeing the students who might otherwise learn often doing far worse because of the affordances of the technology. Some of the richer ones would have traditionally bought term papers, but most wouldn't have had that option, so now there's a whole cohort of kids who are weaker thinkers than they might otherwise be.
no subject
Date: 2025-04-13 06:11 pm (UTC)

One could make the meta-argument that kids are getting smarter at weaseling out of requirements, for good or ill. :wry: A South Asian physicist I know recounted the gatekeeping ~high school final exam that was basically necessary to pass in order to have ANY kind of future in their country. There was a girl who was a very weak student, just needed to check the box for this exam and get on with her life. The teacher took this physicist (well, before they became a physicist) aside and said, "I'm putting her behind you because otherwise she's going to fail." The girl copied the physicist's answers and came in 3rd in the class, which no one believed; but she was able to go on and live her life. Physicist's note: "She did have one excellent academic skill. SHE COULD COPY LIKE THE WIND." Obviously I wasn't there and I haven't been to this country for that matter, but in broad strokes I could well believe that what we have is someone end-running around a fucked SYSTEM so she could (with the aid and abetment of people also forced through the system) move on with her life.
no subject
Date: 2025-04-13 05:41 pm (UTC)

Uh, no, the professional versions run on good hardware are actually extremely good at it... O_O
"an actual fashion designer has training and understanding of light and colour and isn't sewing 20 identical dresses"
Having some close friends who are fashion designers, I'd say an actual fashion designer has training and understanding of light and colour and is still willing to sew 20 identical dresses to get things just right. ;) Admittedly, this may not apply to Shein products, but... Sewing model versions is, to a fashion designer, what making sketches is to a painter.
You would also not say, "an actual painter has training and understanding of composition and color and won't make 20 sketches". Or, say, "an actual writer has training and understanding of language and plot and won't make 20 drafts of a novel". That's... kind of absurd. Trained people still do these things. Some more, some less, but still. Painting a picture that's just right without any discarded attempts is just as unlikely as writing a novel whose first draft is perfect without any editing: some people may be able to pull that off, but, um, almost no one does.
The problem specific to fashion design is that their "sketches" take up so much time and material (which is expensive and can really ruin fashion design students financially, while a failed sketch that almost but not quite works costs a painter only a sheet of paper) that anything that streamlines the process needs to be used.
"I don't understand, fundamentally, the desire to shortcut all the fun stuff in creative fields."
I don't really understand it, either - which is why I went to the effort to learn how to paint the hard way - but I'm acutely aware that not everybody wants to do the same. (Actually, from a rational point of view, investing all that work into something that doesn't pay is pretty stupid.) At some point, it also becomes a matter of privilege and accessibility: how many people can afford to spend years and years of their life learning a field from the ground up "just for fun"? I mean, it's great when they do - but also, there are people who just want to have a colorful painting in their living room to make it look a bit nicer and can neither afford the money to pay a professional artist nor the time to really learn stuff from zero. I won't judge.
no subject
Date: 2025-04-13 05:56 pm (UTC)

ETA: Actually, physical disability generally I'd consider a likely ethical use case. There was a point in time I could not draw a straight line or paint very well because I had a significant hand tremor caused by a medication side effect. As a microcosm of this, a lot of digital painting programs have a "stroke stabilization" setting as an aid for people with weaker hand-eye. I don't think this is a bad thing generally: yes, an artist who wants to do traditional media will work on that skill (or route around it), but for people who have physical limits around hand-eye, I don't have a problem with this myself.