The trouble with AI
Apr. 13th, 2025 07:26 am

AAARGH. I just wanted ChatGPT's help to structure a text. You know - what should be in the introduction, how long should each part be for easy reading, and so on. Unsurprisingly, I'm shit at this stuff, but usually, the AI is of great help - at least when it comes to nonfiction with clear structural requirements. (Letting the AI write texts is, of course, hopeless, so I won't even try. Letting the AI organize text structures before I just write stream-of-consciousness stuff, however? I mean, that could save me some headaches.) Trying to let it organize fiction, however? Wow. WOW. Today, I learned that ChatGPT is really Very Fucking American.
Things I learned:
- The AI will not just reorganize the plot around an acceptable novella structure (which, after all, is what I asked it to do) but will also flag for editing any character behavior that does not conform to American cultural standards.
- The AI told me that my characters are too obsessed with honor and duty and I should consider editing that. I'm like... WAIT... I'm actually writing a Fantasy!Medieval!North!Germany setting. With Fantasy!Medieval!North!German characters with the corresponding cultural background and mindset. (Come on. It's fucking Germany. At least some of the characters take their oaths seriously...) Apparently, Germany written by a German is not acceptable by genre standards...
- The AI completely unasked (!) changed a scene description from a male character making tea for the group to a female character making the tea. Thanks for the casual sexism, I guess.
- The AI described a female character as "flirtatious". She's... not. She is, however, speaking to male characters. In, you know, plot-related ways. Apparently, that's yet another thing the AI can't handle. (Not a problem with the technology itself, I know, but definitely with the training dataset. WTF.)
- The AI completely unasked (!) tried to give a genderfluid character an issuefic subplot centered around Gender!Angst!American!Style. I mean, I obviously don't expect an American piece of software to understand historical German ways of gender expression... which is why I didn't ask it to. This character has a perfectly acceptable subplot centered around military technology and espionage, and no gender issues whatsoever, thanks.
- The AI really wants to change the magic system (which is, of course, North German as fuck, considering the setting) to something ripped off Tolkien.
- The AI is shit at interpreting character motivations in ways that are actually pretty hilarious.
Thanks for the non-help. -_-
no subject
Date: 2025-04-13 04:50 pm (UTC)

Genealogy is one of my hobbies, and the use of AI to index millions of pages of handwritten records has been a huge advance. No one has the money to pay enough people to go through and transcribe/index all these documents, and even if they did or even if there were enough volunteers, it would take decades. Now, they're indexed, and I can search them and find relevant land or court records that I would've had to slog through page by page (if I even knew that they were relevant, because the person I'm looking for wouldn't have been in that deed book's index). Just this weekend I found the probate record for my 4x-great-grandfather that confirmed he was indeed my 4xggf and also gave names for four siblings of my 3xggf who I hadn't known about.
That's what I see as AI done right. The AI is being used to recognize and interpret handwriting and to index records in order to make them easier to find. While it's also being used to provide a transcription, the original record is shown alongside it, so I'm not dependent on the AI's transcription; I can read the original and decide for myself if it's relevant. I'm still the one doing the genealogy part; the AI's only acting as a helpful tool.
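That division of labor is easy to picture as a data structure: the AI's transcription is just one field stored next to a pointer back to the original page image, so the human always gets the last word. A minimal sketch in Python - every class, field, and path here is hypothetical, not any real archive's schema:

```python
from dataclasses import dataclass, field

@dataclass
class IndexedRecord:
    """One entry in a hypothetical genealogy index: the AI's transcription
    sits *alongside* a pointer to the original scan, so a human can always
    check the source instead of trusting the transcription."""
    transcription: str   # AI-generated reading of the handwriting
    image_path: str      # scan of the original page
    collection: str      # e.g. "Probate Book 3"
    page: int

@dataclass
class RecordIndex:
    """Name -> records lookup, so a search replaces a page-by-page slog."""
    entries: dict = field(default_factory=dict)

    def add(self, name: str, record: IndexedRecord) -> None:
        self.entries.setdefault(name.lower(), []).append(record)

    def search(self, name: str) -> list:
        """Return every record mentioning this name; the caller then reads
        the linked original image and judges relevance for themselves."""
        return self.entries.get(name.lower(), [])

# Toy usage: find a probate record, then go verify it against the scan.
index = RecordIndex()
index.add("John Smith", IndexedRecord(
    transcription="...estate of John Smith, dec'd...",
    image_path="scans/probate_book_3/p114.jpg",
    collection="Probate Book 3",
    page=114,
))
for hit in index.search("john smith"):
    print(hit.collection, "p.", hit.page, "->", hit.image_path)
```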
no subject
Date: 2025-04-13 05:36 pm (UTC)

no subject
Date: 2025-04-13 05:55 pm (UTC)

Edit: I'm trying to figure out your rationale for this, because right now it looks like you're fictively categorizing different underlying mathematical/algorithmic types of machine learning/AI based on whether you approve of them or not (on ethical/other grounds); I'm not seeing your explanation for a substantive categorical distinction based on algorithmic mechanism.
no subject
Date: 2025-04-13 06:29 pm (UTC)

That said, it's worth categorizing technology on ethical grounds. Is the harm caused by the technology outweighed by the good that it does? If AI is able to better diagnose cancer, I think you can make a moral argument that the water use is justified (although automating cancer diagnosis without trained professionals who could identify errors is not wise, and we should also question where the data comes from, so it's not without ethical concerns). If the AI generates a sexy Garfield with big naturals or a movie script, it's not justified. If it's used to identify a bombing target in a high-rise building that results in the death of hundreds, the water use is the least of our problems. And these are all separate and important discussions to have.
no subject
Date: 2025-04-13 06:41 pm (UTC)

Is the harm caused by the technology outweighed by the good that it does?
Then we need to nail down a way to quantify "harm" vs. "good" at the point where you're proposing a measurement, because naive analyses are likely to be badly misleading here. Kind of like the entire "paperless workflows will save resources" argument, or the people who think that information "stored in the cloud" lives in the magical aether unicorn dust noösphere because obviously servers and software and connection uptime don't cost time and resources.
If AI is able to better diagnose cancer, I think you can make a moral argument that the water use is justified
Devil's advocate: suppose that we hit a breakpoint of cancer-diagnosis AI specifically using up so much water that agriculture in [whatever] locations is impacted, resulting in food scarcity and irreversible nutritional deficiencies in [NUMBER] children. What then?
If the AI generates a sexy Garfield with big naturals or a movie script, it's not justified.
Devil's advocate: what if the sexy Garfield (...Garfield the orange cartoon cat?! I'm not going to Google for, uh, alternate sexy Garfields) movie is so screamingly profitable that the rights holder, who has a family member with XYZ rare cancer, donates 50% of the proceeds to cancer research, leading in 13 years to a cure for XYZ rare cancer? I would not bank on this specific scenario (although I did know a movie script writer who donated the bulk of his take to a spina bifida charity). But the truth is that if we're doing a cost-benefit analysis, we can't get a true idea of costs or benefits by stopping at "sexy Garfield is a stupid idea," we really are stuck with tracing out the (likely, assumed) second- and third-order consequences. And that turns into hell mode for analysis pretty rapidly.
What if Sexy Garfield inspires someone who's so disgusted by the entire enterprise that they invent a (fake, sci-fi) supervirus that wipes out all AI? What if Sexy Garfield bombs so badly that everyone agrees that "wow, sexy Garfield was stupid and AI is a waste of time" and all the venture capitalists pull out and there's no longer financial incentive for AI? What if Sexy Garfield is so popular and the venture capitalists so venturely capitally that they innovate on a new sustainable form of AI? We're really in the realm of me, a random no-credential sci-fi writer spitballing, but the fact is that real life is full of weirdo cascading consequences, and at least some of them can be predicted, calculated, estimated, prepared for with some spread of assumed/guessed probability.
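For what it's worth, that "spread of assumed/guessed probability" can at least be mocked up. A toy Monte Carlo sketch in Python - every probability and payoff below is invented purely for illustration, not a claim about actual AI costs or benefits:

```python
import random

# Toy illustration of the point above: once second- and third-order
# consequences are in play, the cost-benefit verdict depends entirely
# on guessed probabilities. All numbers are made up.
scenarios = [
    # (probability guess, net outcome in arbitrary "good units")
    (0.001, +1_000_000),  # profits fund a cure for a rare cancer
    (0.050, -500),        # water use worsens local food scarcity
    (0.100, +200),        # backlash kills off wasteful AI ventures
    (0.849, -10),         # nothing much happens; resources still spent
]

expected_value = sum(p * v for p, v in scenarios)
print(f"naive expected value: {expected_value:+.1f}")

# The "spread of assumed/guessed probability": jitter the guesses and
# see how wildly the verdict swings. The swing is the real finding.
random.seed(0)
outcomes = []
for _ in range(10_000):
    ev = sum(p * random.uniform(0.5, 1.5) * v for p, v in scenarios)
    outcomes.append(ev)
print(f"range under jittered guesses: {min(outcomes):+.1f} .. {max(outcomes):+.1f}")
```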
no subject
Date: 2025-04-13 07:15 pm (UTC)

I don't think defining "new" is nearly that hard. Is the thing generating a report on where it finds a pattern in medical data, or is the thing pretending to be a poet?
no subject
Date: 2025-04-13 06:32 pm (UTC)

That's something the researchers can fix now that they've realized it, and the AI may still be able to do a good job once the algorithm's been retrained, but it's not an instant magic solution.
I know some academics who are required to give their university an annual report on what they've done during the year. Often the report has a very specific format and takes a few hours to put together from scratch; they've sped up the process of creating that report by feeding the basic information into ChatGPT and getting back a draft that they can then quickly clean up. On the one hand, that's a use of it that's benefitted them; on the other hand, arguably it's a waste of time to have them do the report in the first place, or to have the faculty member put it together rather than having a departmental admin do the job.
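For the curious, that report-drafting trick is only a few lines. A hedged sketch using the official OpenAI Python client - the model name, the report sections, and the sample facts here are all assumptions, and any real university's template would differ:

```python
from openai import OpenAI  # assumes the official `openai` Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The raw facts; the academic supplies these, the model only formats them.
accomplishments = """
- Taught: HIST 201 (two sections), HIST 540 graduate seminar
- Published: article in Journal X (vol. 12), one book chapter
- Service: curriculum committee, 3 MA theses supervised
"""

# Hypothetical required format; substitute the real template's sections.
prompt = (
    "Draft a faculty annual report with the sections Teaching, "
    "Research, and Service, in formal prose, using ONLY the facts "
    f"below. Do not invent anything.\n\n{accomplishments}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption; any chat model would do
    messages=[{"role": "user", "content": prompt}],
)
draft = response.choices[0].message.content
print(draft)  # a starting point the academic then checks and cleans up
```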
no subject
Date: 2025-04-13 06:44 pm (UTC)

Well, yes, they could do something that actually benefits humanity in that time. XD Excellent use of resources...