The trouble with AI
Apr. 13th, 2025 07:26 am

AAARGH. I just wanted chatgpt's help to structure a text. You know - what should be in the introduction, how long should each part be for easy reading, and so on. Unsurprisingly, I'm shit at this stuff, but usually, the AI is of great help - at least when it comes to nonfiction with clear structural requirements. (Letting the AI write texts is, of course, hopeless, so I won't even try. Letting the AI organize text structures before I just write stream-of-consciousness stuff, however? I mean, that could save me some headaches.) Trying to let it organize fiction, however? Wow. WOW. Today, I learned that chatgpt is really Very Fucking American.
Things I learned:
- The AI will not just reorganize the plot around an acceptable novella structure (which, after all, is what I asked it to do) but will also flag for editing any character behavior that does not conform to American cultural standards.
- The AI told me that my characters are too obsessed with honor and duty and I should consider editing that. I'm like... WAIT... I'm actually writing a Fantasy!Medieval!North!Germany setting. With Fantasy!Medieval!North!German characters with the corresponding cultural background and mindset. (Come on. It's fucking Germany. At least some of the characters take their oaths seriously...) Apparently, Germany written by a German is not acceptable by genre standards...
- The AI completely unasked (!) changed a scene description from a male character making tea for the group to a female character making the tea. Thanks for the casual sexism, I guess.
- The AI described a female character as "flirtatious". She's... not. She is, however, speaking to male characters. In, you know, plot-related ways. Apparently, that's yet another thing the AI can't handle. (Not a problem with the technology itself, I know, but definitely with the training dataset. WTF.)
- The AI completely unasked (!) tried to give a genderfluid character an issuefic subplot centered around Gender!Angst!American!Style. I mean, I obviously don't expect an American piece of software to understand historical German ways of gender expression... which is why I didn't ask it to. This character has a perfectly acceptable subplot centered around military technology and espionage, and no gender issues whatsoever, thanks.
- The AI really wants to change the magic system (which is, of course, North German as fuck, considering the setting) to something ripped off Tolkien.
- The AI is shit at interpreting character motivations in ways that are actually pretty hilarious.
Thanks for the non-help. -_-
no subject
Date: 2025-04-13 05:55 pm (UTC)

Edit: I'm trying to figure out your rationale for this, because right now it looks like you're fictively categorizing different underlying mathematical/algorithmic types of machine learning/AI based on whether you approve of them or not (on ethical/other grounds). I'm not seeing your explanation for a substantive categorical distinction based on algorithmic mechanism.
no subject
Date: 2025-04-13 06:29 pm (UTC)

That said, it's worth categorizing technology on ethical grounds. Is the harm caused by the technology outweighed by the good that it does? If AI is able to better diagnose cancer, I think you can make a moral argument that the water use is justified (although automating cancer diagnosis without trained professionals who could identify errors is not wise, and we should also question where the data comes from, so it's not without ethical concerns). If the AI generates a sexy Garfield with big naturals or a movie script, it's not justified. If it's used to identify a bombing target in a high-rise building that results in the death of hundreds, the water use is the least of our problems. And these are all separate and important discussions to have.
no subject
Date: 2025-04-13 06:41 pm (UTC)

Is the harm caused by the technology outweighed by the good that it does?
Then we need to nail down a way to quantify "harm" vs. "good" at the point where you're proposing a measurement, because naive analyses are likely to be badly misleading here. Kind of like the entire "paperless workflows will save resources" thing, or the people who think that information "stored in the cloud" lives in the magical aether unicorn dust noösphere because obviously servers and software and connection uptime don't cost time and resources.
If AI is able to better diagnose cancer, I think you can make a moral argument that the water use is justified
Devil's advocate: suppose we hit a breakpoint where cancer-diagnosis AI specifically uses up so much water that agriculture in [whatever] locations is impacted, resulting in food scarcity and irreversible nutritional deficiencies in [NUMBER] children. What then?
If the AI generates a sexy Garfield with big naturals or a movie script, it's not justified.
Devil's advocate: what if the sexy Garfield (...Garfield the orange cartoon cat?! I'm not going to Google for, uh, alternate sexy Garfields) movie is so screamingly profitable that the rights holder, who has a family member with XYZ rare cancer, donates 50% of the proceeds to cancer research, leading in 13 years to a cure for XYZ rare cancer? I would not bank on this specific scenario (although I did know a movie script writer who donated the bulk of his take to a spina bifida charity). But the truth is that if we're doing a cost-benefit analysis, we can't get a true idea of costs or benefits by stopping at "sexy Garfield is a stupid idea"; we really are stuck with tracing out the (likely, assumed) second- and third-order consequences. And that turns into hell mode for analysis pretty rapidly.
What if Sexy Garfield inspires someone who's so disgusted by the entire enterprise that they invent a (fake, sci-fi) supervirus that wipes out all AI? What if Sexy Garfield bombs so badly that everyone agrees that "wow, sexy Garfield was stupid and AI is a waste of time" and all the venture capitalists pull out and there's no longer financial incentive for AI? What if Sexy Garfield is so popular and the venture capitalists so venturely capitally that they innovate on a new sustainable form of AI? We're really in the realm of me, a random no-credential sci-fi writer, spitballing here, but the fact is that real life is full of weirdo cascading consequences, and at least some of them can be predicted, calculated, estimated, prepared for with some spread of assumed/guessed probability.
no subject
Date: 2025-04-13 07:15 pm (UTC)

I don't think defining "new" is nearly that hard. Is the thing generating a report on where it finds a pattern in medical data, or is the thing pretending to be a poet?