[personal profile] eller
AAARGH. I just wanted ChatGPT's help to structure a text. You know - what should be in the introduction, how long each part should be for easy reading, and so on. Unsurprisingly, I'm shit at this stuff, but usually, the AI is of great help - at least when it comes to nonfiction with clear structural requirements. (Letting the AI write texts is, of course, hopeless, so I won't even try. Letting the AI organize text structures before I just write stream-of-consciousness stuff, however? I mean, that could save me some headaches.) Trying to let it organize fiction, though? Wow. WOW. Today, I learned that ChatGPT is really Very Fucking American.

Things I learned:
- The AI will not just try to reorganize the plot around an acceptable novella structure (which, after all, is what I asked it to do) but will also flag for editing any character behavior that does not conform to American cultural standards.
- The AI told me that my characters are too obsessed with honor and duty and I should consider editing that. I'm like... WAIT... I'm actually writing a Fantasy!Medieval!North!Germany setting. With Fantasy!Medieval!North!German characters with according cultural background and mindset. (Come on. It's fucking Germany. At least some of the characters take their oaths seriously...) Apparently, Germany written by a German is not acceptable by genre standards...
- The AI completely unasked (!) changed a scene description from a male character making tea for the group to a female character making the tea. Thanks for the casual sexism, I guess.
- The AI described a female character as "flirtatious". She's... not. She is, however, speaking to male characters. In, you know, plot-related ways. Apparently, that's yet another thing the AI can't handle. (Not a problem with the technology itself, I know, but definitely with the training dataset. WTF.)
- The AI completely unasked (!) tried to give a genderfluid character an issuefic subplot centered around Gender!Angst!American!Style. I mean, I obviously don't expect an American piece of software to understand historical German ways of gender expression... which is why I didn't ask it to. This character has a perfectly acceptable subplot centered around military technology and espionage, and no gender issues whatsoever, thanks.
- The AI really wants to change the magic system (which is, of course, North German as fuck, considering the setting) to something ripped off Tolkien.
- The AI is shit at interpreting character motivations in ways that are actually pretty hilarious.

Thanks for the non-help. -_-

Date: 2025-04-13 04:50 pm (UTC)
From: [personal profile] castiron
And it doesn't help the discussion when all sorts of different machine learning/AI are lumped in with generative AI.

Genealogy is one of my hobbies, and the use of AI to index millions of pages of handwritten records has been a huge advance. No one has the money to pay enough people to go through and transcribe/index all these documents, and even if they did or even if there were enough volunteers, it would take decades. Now, they're indexed, and I can search them and find relevant land or court records that I would've had to slog through page by page (if I even knew that they were relevant, because the person I'm looking for wouldn't have been in that deed book's index). Just this weekend I found the probate record for my 4x-great-grandfather that confirmed he was indeed my 4xggf and also gave names for four siblings of my 3xggf who I hadn't known about.

That's what I see as AI done right. The AI is being used to recognize and interpret handwriting and to index records in order to make them easier to find. While it's also being used to provide a transcription, the original record is shown alongside it, so I'm not dependent on the AI's transcription; I can read the original and decide for myself if it's relevant. I'm still the one doing the genealogy part; the AI's only acting as a helpful tool.

Date: 2025-04-13 05:36 pm (UTC)
From: [personal profile] sabotabby
No, and this is one of my pet peeves. I have friends who use machine learning in medical fields and there are some use cases there (though I think people need to be more careful and critical about it). But it's lumped in with ChatGPT, which as far as I can tell is not useful at all.

Date: 2025-04-13 05:55 pm (UTC)
From: [personal profile] yhlee
...because at a base level, the underlying math is very similar. That would be the tech/math reason to classify them together.

Edit: I'm trying to figure out your rationale for this, because right now it looks like you're fictively categorizing different underlying mathematical/algorithmic types of machine learning/AI based on whether you approve of them or not (on ethical/other grounds); I'm not seeing your explanation for a substantive categorical distinction based on algorithmic mechanism.
Edited Date: 2025-04-13 06:21 pm (UTC)

Date: 2025-04-13 06:29 pm (UTC)
From: [personal profile] sabotabby
They're both sorting through large amounts of data to detect patterns, but one type is claiming to...detect patterns. With the other one, the marketing claim is that it produces something new (hence the generative distinction).

That said, it's worth categorizing technology on ethical grounds. Is the harm caused by the technology outweighed by the good that it does? If AI is able to better diagnose cancer, I think you can make a moral argument that the water use is justified (although automating cancer diagnosis without trained professionals who could identify errors is not wise, and we should also question where the data comes from, so it's not without ethical concerns). If the AI generates a sexy Garfield with big naturals or a movie script, it's not justified. If it's used to identify a bombing target in a high-rise building that results in the death of hundreds, the water use is the least of our problems. And these are all separate and important discussions to have.

Date: 2025-04-13 06:41 pm (UTC)
From: [personal profile] yhlee
We need to nail down a definition of "new," then. Are we talking purely about combinatorics or some higher de novo or ex nihilo standard? Because in sf/f writing (allegedly) by humans alone, the SCREAMING majority of sf/f SCREAMINGLY fails the higher bar, and I include myself. Music tends to be hard mode here in terms of combinatoric limitations: even if we restrict ourselves to Western tonal music, you fundamentally...only have...twelve notes...in the chromatic scale.

Is the harm caused by the technology outweighed by the good that it does?

Then we need to nail down a way to quantify "harm" vs. "good" at the point where you're proposing a measurement, because naive analyses are likely to be badly misleading here. Kind of like the entire "paperless workflows will save resources" thing, or the people who think that information "stored in the cloud" lives in the magical aether unicorn dust noösphere because obviously servers and software and connection uptime don't cost time and resources.

If AI is able to better diagnose cancer, I think you can make a moral argument that the water use is justified

Devil's advocate: suppose that we hit a breakpoint of cancer-diagnosis AI specifically using up so much water that agriculture in [whatever] locations is impacted, resulting in food scarcity and irreversible nutritional deficiencies in [NUMBER] children. What then?

If the AI generates a sexy Garfield with big naturals or a movie script, it's not justified.

Devil's advocate: what if the sexy Garfield (...Garfield the orange cartoon cat?! I'm not going to Google for, uh, alternate sexy Garfields) movie is so screamingly profitable that the rights holder, who has a family member with XYZ rare cancer, donates 50% of the proceeds to cancer research, leading in 13 years to a cure for XYZ rare cancer? I would not bank on this specific scenario (although I did know a movie script writer who donated the bulk of his take to a spina bifida charity). But the truth is that if we're doing a cost-benefit analysis, we can't get a true idea of costs or benefits by stopping at "sexy Garfield is a stupid idea"; we really are stuck with tracing out the (likely, assumed) second- and third-order consequences. And that turns into hell mode for analysis pretty rapidly.

What if Sexy Garfield inspires someone who's so disgusted by the entire enterprise that they invent a (fake, sci-fi) supervirus that wipes out all AI? What if Sexy Garfield bombs so badly that everyone agrees that "wow, sexy Garfield was stupid and AI is a waste of time" and all the venture capitalists pull out and there's no longer financial incentive for AI? What if Sexy Garfield is so popular and the venture capitalists so venturely capitally that they innovate on a new sustainable form of AI? We're really in the realm of me, a random no-credential sci-fi writer, spitballing; but the fact is that real life is full of weirdo cascading consequences, and at least some of them can be predicted, calculated, estimated, and prepared for with some spread of assumed/guessed probability.

Date: 2025-04-13 07:15 pm (UTC)
From: [personal profile] sabotabby
Honestly, I think those kinds of thought experiments start to get into Effective Altruist/Rationalist territory. Eventually you get to "well what if AI could solve all of humanity's problems and upload us into a utopian singularity would it be worth it then" and that is just not useful on a policy level, because it's fantasy. We need to look at real consequences now, real places experiencing drought where these companies are building data centres, real economic damage caused by VC speculation, real layoffs because middle managers can't distinguish between good and good-enough news reporting, real misinformation, real dead kids in Gaza. Material reality, not what ifs.

I don't think defining "new" is nearly that hard. Is the thing generating a report on where it finds a pattern in medical data, or is the thing pretending to be a poet?

Date: 2025-04-13 06:32 pm (UTC)
From: [personal profile] castiron
A commenter on another site referenced (but didn't cite) an effort to train an AI to recognize tuberculosis in chest X-rays; at first the AI seemed to be doing a great job, but someone realized that part of what it was picking up on was the metadata giving the age of the X-ray machine -- an older X-ray machine implied a poorer region, which correlates with where TB's more common.

Now, that's something the researchers can fix now that they've realized it, and the AI may still be able to do a good job once the algorithm's been retrained, but it's not an instant magic solution.

I know some academics who are required to give their university an annual report on what they've done during the year. Often the report has a very specific format and takes a few hours to put together from scratch; they've sped up the process of creating that report by feeding the basic information into ChatGPT and getting back a draft that they can then quickly clean up. On the one hand, that's a use of it that's benefitted them; on the other hand, arguably it's a waste of time to have them do the report in the first place, or to have the faculty member put it together rather than having a departmental admin do the job.
