eller: iron ball (Default)
[personal profile] eller
AAARGH. I just wanted chatgpt's help to structure a text. You know - what should be in the introduction, how long should each part be for easy reading, and so on. Unsurprisingly, I'm shit at this stuff, but usually, the AI is of great help - at least when it comes to nonfiction with clear structural requirements. (Letting the AI write texts is, of course, hopeless, so I won't even try. Letting the AI organize text structures before I just write stream-of-consciousness stuff, however? I mean, that could save me some headaches.) Trying to let it organize fiction, however? Wow. WOW. Today, I learned that chatgpt is really Very Fucking American.

Things I learned:
- The AI will not just try to reorganize the plot around an acceptable novella structure (which, after all, is what I asked it to do) but flag any character behavior for editing that does not conform to American cultural standards.
- The AI told me that my characters are too obsessed with honor and duty and I should consider editing that. I'm like... WAIT... I'm actually writing a Fantasy!Medieval!North!Germany setting. With Fantasy!Medieval!North!German characters with according cultural background and mindset. (Come on. It's fucking Germany. At least some of the characters take their oaths seriously...) Apparently, Germany written by a German is not acceptable by genre standards...
- The AI completely unasked (!) changed a scene description from a male character making tea for the group to a female character making the tea. Thanks for the casual sexism, I guess.
- The AI described a female character as "flirtatious". She's... not. She is, however, speaking to male characters. In, you know, plot-related ways. Apparently, that's yet another thing the AI can't handle. (Not a problem with the technology itself, I know, but definitely with the training dataset. WTF.)
- The AI completely unasked (!) tried to give a genderfluid character an issuefic subplot centered around Gender!Angst!American!Style. I mean, I obviously don't expect an American piece of software to understand historical German ways of gender expression... which is why I didn't ask it to. This character has a perfectly acceptable subplot centered around military technology and espionage, and no gender issues whatsoever, thanks.
- The AI really wants to change the magic system (which is, of course, North German as fuck, considering the setting) to something ripped off Tolkien.
- The AI is shit at interpreting character motivations in ways that are actually pretty hilarious.

Thanks for the non-help. -_-

Date: 2025-04-13 10:53 am (UTC)
cyphercrypt: No copyright infringement intended (Default)
From: [personal profile] cyphercrypt
Are you using ChatGPT-4o? Are you subscribed like I am? There are many AI presets you can access if you are subscribed, but I do not know if they're worth it.

Date: 2025-04-13 10:41 pm (UTC)
cyphercrypt: No copyright infringement intended (Default)
From: [personal profile] cyphercrypt
I would check the differences between the paid and unpaid versions. IMO, ChatGPT-4o is an indispensable study buddy, but I'm not sure what else it is good for.

Date: 2025-04-13 12:39 pm (UTC)
sabotabby: (doom doom doom)
From: [personal profile] sabotabby
This is not even the main problem with GenAI. The main problem is that it exists at all.

Date: 2025-04-13 02:40 pm (UTC)
sabotabby: (books!)
From: [personal profile] sabotabby
Conceptual doesn't matter when it destroys the environment, livelihoods, makes people think and reason less well, creates disinformation, and produces nothing of value. I've yet to see a single use case in the creative fields.

I don't think structure in writing is a rote task the way calculations or weaving could be (though I think it's very important to learn how to calculate—my evidence is that my students can't do any sort of advanced math or mathematical reasoning because people decided that rote learning is bad). A better comparison, since you're also an artist, is "why should I learn how to do composition when the fun part is applying colour and details to a piece?" The answer is that structure, concept, and prose are inherently intertwined—form follows function. Structure in writing is an important part of writing, and if you suck at it (which I also do, and most people do, until they git gud), the way to improve is not to hope that a plagiarism machine can do it for you, but to learn how to do it so that you can get to the kind of creative rule-breaking that makes writing actually interesting to read.

long-winded agreement

Date: 2025-04-13 04:59 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
this is something that has been said about pretty much any new technology in the creative fields, including but not limited to nasty stuff like the printing press (!) that destroyed all the valuable skills that come from copying manuscripts by hand

Why stop there, Eller. :) We could implicate the invention of writing (any writing system) as making people stupider. I was talking with Marie Brennan (an anthropologist) about writing techniques that descend from oral tradition as (probably) mnemonic aids (parallel structure in poetry, rhyme/meter, alliteration, assonance, kennings, whatever), and about memory palace techniques/the method of loci.

As someone who peer-tutored Ivy League students writing academic essays during uni from 1998-2001, I have to say that stupidity/ineptness/inexperience at structuring even comparatively simple academic essays, let alone novels, cannot be localized to the advent of AI. The ways in which people struggle with this may be more exposed or differently exposed but again, teaching this as a cognitive skill is a surprisingly sticky problem. :]

For that matter, we could implicate written music notation vs. musicianship. Most serious classical (Western) musicians do have pretty serious ear training but it's also possible to be some kind of musician who's dependent on music notation rather than being able to play things by ear.

Re: long-winded agreement

Date: 2025-04-13 05:28 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
Google Maps and/or auto-navigation has definitely made people in my life shit at reading physical maps. I'm shit at maps and I'm still better at it (because I had to navigate for Joe and me on paper maps) than my daughter.

Date: 2025-04-13 05:05 pm (UTC)
sabotabby: (books!)
From: [personal profile] sabotabby
I would be surprised re: the colour stuff, because the art version of the plagiarism machine sucks at colours. As with writing, it does a good job of producing slop that superficially looks good, but every time I've seen one of these renderings, the colour and lighting are chaotic. If your goal is mass-produced Shein crap I guess it's useful for advertising, but an actual fashion designer has training and understanding of light and colour and isn't sewing 20 identical dresses.

There are actually valid points about the printing press—but more importantly, about new technologies that have been around longer than ChatGPT. Algorithmic social media has also made us think less well. The academic fraud that is Joseph Campbell and its evolution into the Pixar formula has made film and to a lesser extent commercial genre fiction less interesting. Etc. I know enough about how the technology works to confidently assert that it will not improve the arts or teach anyone how to be a better writer.

I don't understand, fundamentally, the desire to shortcut all the fun stuff in creative fields. To take a field I suck at, I would not see the point in using ChatGPT to make music. I have no musical talent myself, but all the joy of making music would theoretically be figuring out the making of music, not just getting a machine to spit out something that vaguely sounds like the thing in my head. Even as a hobbyist, it's just a fundamentally different thing that skates by the point of the thing itself.

Date: 2025-04-13 05:18 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
Okay, but I also know enough about how the technology works and I am also a writer and I have also taught writing, if we're now arguing from authority, so I'd like to raise your "I can confidently assert": I can also assert that for some people, this is a useful tool. Genuinely, who are you to decide where the locus of "fun stuff" or creativity is for people who aren't you (or your students)? How would you then categorize aleatoric or algorithmic music created/designed/coded in a tool like Csound or Cockos Reaper? Or a drag-and-drop video-game-making tool?

If the bar is "it will not...teach anyone how to be a better writer," there are a kazillion hand-created-by-actual-human-writers tools/books/whatever that fail that bar too; so do we then get rid of those because they are a pox upon the house of human creativity?

(Ironically, I don't use ChatGPT because I got bored decades ago after two days with Eliza and extensive reading on Minsky, Schank, et al. I don't have an ethical problem with "vegan" or legally licensed AI tools.)

The homogenization of commercial narrative is a multi-pronged mess so I'm not going to touch that as there is not enough space in the margin.

Date: 2025-04-13 05:34 pm (UTC)
sabotabby: (books!)
From: [personal profile] sabotabby
I don't know enough about music production or gaming to say. In terms of the how-to-write books, the difference I see is that when one fails, it's critiqued as a failure. If it's useful, people buy it, read it, and use it; if it's useless, they mostly don't. This is a strangely capitalistic argument for me, I know, but even the worst how-to can at least generate discussion. Even the worst book wastes no more trees than a useful book, and hasn't profited by intellectual property theft. It's not forced on anyone, nor do billionaires pump money and resources into making it more influential or widely adopted than it would be on its own merits.

In terms of who I am, eh, I'm someone who can now directly trace how my work was stolen without compensation or consent to make someone rich. So I do have a moral objection as well as an aesthetic one. People can have fun making TTRPG characters with picrew.me if they aren't interested in learning to draw.

And I've seen what students do with these tools. Even when it was "only" Grammarly, the willingness to cede authorial control to software made their work superficially more polished but resulted in sloppier writing and thinking.

This is just in art and writing. My partner teaches science, and recently had to contend with students insisting that HPV isn't a virus, because the AI told them that it wasn't. Even when he explained what it was and how it worked, they refused to believe him. This makes them less likely to get a lifesaving vaccine and more likely to die because they can't differentiate between machine hallucination and actual information. It's not just me and my ego and judgment, it's about how we learn—or don't—to think at a critical and structural level.
Edited Date: 2025-04-13 05:35 pm (UTC)

Date: 2025-04-13 05:47 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
But again, people have been stupid since time immemorial. If the argument is "AI makes it easier in scale for people to cheat or be stupid," that's one argument; if the argument is "AI causes people to cheat or be stupid," that is a different argument. Which assertion is the one you are making?

We have people believing things based on shitty snake oil advertisements - if you look at the history of medical advertising, you have people buying snake oil magnetized treatments for XYZ long before modern computers were around. Chad Orzel, who's a physics professor at Union College, talks about how he was bemused by the sudden hand-writing hand-wringing [edit: fixed Freudian typo lol] from humanities colleagues around cheating on essay writing, because it's so much easier to cheat in a typical math/science exam; this is a Very Old (Sometimes Boring) Problem. People having to distinguish shitty information from good information in general is an Old Hard Problem. The prevalence of machine hallucination exposes that problem in deeply troubling ways, but it's not a new problem. I mean, Herodotus ffs.

If we're looking at compensation schemes vs. IP theft, sure we can look at how tons of works (I think something like ~80 of my works turned up in that Atlantic database of stolen written works) are stolen without permission and used for profit; but this then ties into how the entire compensation system for creative narrative work has been in hell mode for a long time. No one ever adequately squared the circle regarding DRM vs. ebook pricing vs. ebook piracy. If we're at generalized compensation for narrative/creative work, capitalism has a ton of specific problems in this space, but also at the point where the Nibelungenlied has a whole fucking shout-out to PLEASE PAY UR LOCAL MINSTREL KTHX, the general problem of compensation predates capitalism by centuries.
Edited Date: 2025-04-13 05:48 pm (UTC)

Date: 2025-04-13 06:04 pm (UTC)
sabotabby: (teacher lady)
From: [personal profile] sabotabby
I guess both?? Definitely scale—as you say, these are old, old problems. And grifting isn't new, of course, so the grift of "believe this software and not a professional" isn't unique to AI.

But I do believe in affordances, and there are certain tendencies that the technology does encourage, in the same way that affordances in, say, algorithmic social media will lend themselves to bad political thinking over good political thinking. And that's where the latter assertion, that AI causes sloppy thinking rather than allowing the people who would be sloppy thinkers in any event to get away with it, is also something that I believe to be true.

I will say there's substantially more wiggle room in the latter argument. I've been researching the moral panic around cellphones and social media (curiously, in education, this is considered a much larger problem than ChatGPT), and I think the kids who are addicted to social media and phones would probably, in earlier ages, have done other things to avoid learning. But having access to social media and phones is also more distracting to me, an adult who didn't own a phone until I was 30. So while I do think there's a moral panic, I also do think that designing apps that work like slot machines to exploit loopholes in human cognition probably results in behaviours that wouldn't otherwise happen.

Likewise, I have a rough idea of the curve of students who are good at writing and interested in learning versus the ones for whom it's pure hell and who will look for any reason to avoid it. If it were purely a matter of "AI makes it easier for people to cheat and be stupid," you'd think that the latter group would be the ones doing it the most. But it's actually the middle to high achievers, who normally might struggle through a difficult task, who are giving up and resorting to ChatGPT. That can't be divorced from other material conditions—namely, grade inflation and economic instability—but I am seeing the students who might otherwise learn often doing far worse because of the affordances of the technology. Some of the richer ones would have traditionally bought term papers, but most wouldn't have had that option, so now there's a whole cohort of kids who are weaker thinkers than they might otherwise be.

Date: 2025-04-13 06:11 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
This is a much more interesting/stronger argument and I agree with large portions of it. That said, I think we're really now in the absolutely fucked (and out of my scope of knowledge) realm of economic incentives around the entire consumer tech industry and/or capitalism and/or economic instability for average citizens and kids. We're deffo long past the point of "we keep introducing tools (technological or cultural or otherwise) without having any idea, in an era of accelerating change, what the long-term cascading consequences are." Ironically at that point I tap out because I don't understand enough about global economics to even frame the question, and as someone with a background in activism, you're much better equipped to analyze that side of the problem. Sadly, my readings have been pretty narrowly focused on (bluntly) "how can I make things go KABOOM! in entertaining ways in a shitty commercial novel?"

One could make the meta-argument that kids are getting smarter at weaseling out of requirements, for good or ill. :wry: A South Asian physicist I know recounted the gatekeeping ~high school final exam that was basically necessary to pass in order to have ANY kind of future in their country. There was a girl who was a very weak student and just needed to check the box for this exam and get on with her life. The teacher took this physicist (well, before they became a physicist) aside and said, "I'm putting her behind you because otherwise she's going to fail." The girl copied the physicist's answers and came in 3rd in the class, which no one believed; but she was able to go on and live her life. Physicist's note: "She did have one excellent academic skill. SHE COULD COPY LIKE THE WIND." Obviously I wasn't there, and I haven't been to this country for that matter, but in broad strokes I could well believe that what we have is someone end-running around a fucked SYSTEM so she could (with the aid and abetment of people also forced through the system) move on with her life.

Date: 2025-04-13 05:56 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
Yes: there's deffo an accessibility angle here. I know at least one legally blind person who uses AI tools to parse images and/or to generate alt text for images because it would be cost-prohibitive to hire a human to do that for them.

ETA: Actually, physical disability generally I'd consider a likely ethical use case. There was a point in time I could not draw a straight line or paint very well because I had a significant hand tremor caused by a medication side effect. As a microcosm of this, a lot of digital painting programs have a "stroke stabilization" setting as an aid for people with weaker hand-eye. I don't think this is a bad thing generally: yes, an artist who wants to do traditional media will work on that skill (or route around it), but for people who have physical limits around hand-eye, I don't have a problem with this myself.
Edited Date: 2025-04-13 06:15 pm (UTC)

Date: 2025-04-13 05:36 pm (UTC)
castiron: cartoony sketch of owl (Default)
From: [personal profile] castiron
That's one of the concerns that's been raised about the use of AI in processing medical imaging. The folks promoting AI say "The AI can go through all the images and diagnose all the easy and obvious stuff, and then the experienced human can look at the edge cases! It'll save the humans so much time and effort!" Except...the experienced human got that way by looking at hundreds or thousands of images and learning to tell unusual-but-healthy tissue from infected or cancerous tissue. If the AI takes that over, how is the brand new technician going to develop that skill?

Date: 2025-04-13 05:54 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
Yes: this is one of the really good cautionary arguments regarding AI use. I've seen Django Wexler (who used to work in software engineering) discuss this issue with regard to AI coding assistants too - the experienced engineers know how to winnow the chaff out of the AI output, but they got there by doing the work themselves.

Date: 2025-04-13 03:55 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
I've yet to see a single use case in the creative fields.

Patrick J. Jones says that for whatever reason, the visual AI generators are genius at color schemes.

ETA: Perhaps you wouldn't define this as a "creative field," but in professional Go/baduk/wei qi, the defeat of Lee Sedol by AlphaGo was hailed in that community BECAUSE AlphaGo made wild-ass moves that looked completely bonkers to the professional human players, and those moves worked. That was hugely energizing for the potential to improve human play and understanding of Go strategy. Many top Go players characterized AlphaGo's unorthodox plays as seemingly-questionable moves that initially befuddled onlookers but made sense in hindsight: "All but the very best Go players craft their style by imitating top players. AlphaGo seems to have totally original moves it creates itself."

I have to agree with Eller that there are a lot of people whose interest is not being artists themselves in XYZ given field but "just" playing with output in a given discipline. Even someone who is, say, a professional weaver may not actually want to go all-in on learning to draw manga-style art for tinkering with TTRPG character portraits. One might look in music at precedents like 18th-century Musikalisches Würfelspiel music generators; in literature, this really looks like it's not qualitatively different, only different in scale/extent, from some of the Oulipo experiments/works.

my evidence is that my students can't do any sort of advanced math or mathematical reasoning because people decided that rote learning is bad

Counterpoint: I remember the entire "students shouldn't have calculators because it'll make them bad at math" panic. But computational methods are hugely important in e.g. numerical methods in calculus or computer programming. I'm not materially convinced that having to statistically crack Vigenère ciphers by hand (not even a four-function calculator) improved my understanding of the number theory involved. Hell, while we're at it, studies strongly suggest abacus use doesn't materially improve conceptual math understanding, and "manipulatives" can often lead to rote muscle-memory manipulation without improving understanding of the underlying mathematical systems. I think this is a much stickier problem generally than "AI is bad" can explain.

Regarding "rote learning is bad" - again, this is a much stickier problem in math pedagogy. You can include rote learning as a prerequisite for many kinds of foundational math; it can be necessary, but one can still have people who can handle the rote learning and flunk out of conceptual understanding. See Liping Ma's Knowing and Teaching Elementary Mathematics for an absolutely devastating critique of US elementary mathematics teaching in this regard (with comparisons to similar practices in China, where the teachers have less formal education but use teaching methods that tend to produce better conceptual math understanding, starting with basics like "well, what IS place value and why do we care?").
Edited Date: 2025-04-13 04:06 pm (UTC)

Date: 2025-04-13 05:10 pm (UTC)
sabotabby: (books!)
From: [personal profile] sabotabby
I'm not good enough at math myself to weigh in—I just know that the discovery method or vibes-based math education they're using here has resulted in students who are increasingly unable to think mathematically or understand statistics. Not that I was any good at it, or the rote memorization stuff, but they are worse than I am. To the point where they can't see a problem, figure out what they'd have to do to find an answer, and use the calculator in their phones to calculate it. The second there's a number, they turn their brains off.

I don't get TTRPG or other hobby use of GenAI either, to be honest. For me, part of the joy in that is community and connection, it's one of the few things in life that remains free, and no one cares if you're good at it or not. My GM is always using AI stuff and it drags me immediately out of immersion because the style is so generic and hateful and I can't help but think of the environmental cost.

Date: 2025-04-13 05:35 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
As someone with an M.A. in math pedagogy (if that even matters anymore), I think it's likelier that they would have been shit at it in a traditional rote memorization context but the failure modes would have been different and would have presented differently. Every generation in the US (speaking from the context I know) has had some random-ass math reform that attempts to be the magic fix ("New Math" - I still think Venn diagrams should be near-mandatory although most civilians are not going to need an axiomatic-system introduction to associativity; CPM; Cuisenaire rods and manipulatives; soroban and abacus; graphing calculators). The truth is that there isn't a generalized magic fix to "make people learn to think mathematically." The failure modes keep changing and the magnitude of the failure modes change as well. But fundamentally, the problem (in the US) is that engaging people in genuine mathematical problem solving is pedagogically non-trivial and does not lend itself well (if at all) to the kinds of multiple-choice assessments that have taken over in the educational system here.

Sure, e.g. South Korea has consistently had much better results in math education. But South Korea also has elementary schoolers signed up for hagwon / night school classes and sometimes doing up to 16 hours of study a day stressed out of their everloving minds because of the examination system. I don't know that I consider that an educational win either.

I don't get TTRPG or other hobby use of GenAI either, to be honest. For me, part of the joy in that is community and connection, it's one of the few things in life that remains free, and no one cares if you're good at it or not.

GMing is a lot of work; AI-generated dungeon text is different in scale but not in kind from the GM rolling off ten different dungeon random encounter tables. If I had worked a 60-hour week and was tired and we only had three hours for a session, I'd be looking for shortcuts too.

Or people who don't have a gaming group for whatever reason. Solo journaling RPGs do exist and some are terrific but again, we're now looking at a difference in scale rather than kind.

Date: 2025-04-13 05:41 pm (UTC)
sabotabby: (books!)
From: [personal profile] sabotabby
It's complicated, and I have dyscalculia and probably never would have been good at math. But at least I learned enough, back in the day when rote learning was mixed with problem solving, that I can kind of figure out what the problem is. I think there's some middle ground between whatever the US is doing, what Canada's doing, and what South Korea is doing.

Weirdly, the GM isn't using AI-generated text. It's just images, which are completely unnecessary and just for flavour. He's got two artists in the group. But also, I just don't regard any of these expressions—art, writing, gaming—as needs, so when I weigh it against the ethical horrors of AI, I just kinda go, "well, no one made you do it."

Date: 2025-04-13 06:28 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
I would argue from a design standpoint that 70% of any game is "game feel" or "flavo(u)r," but that's a separate argument. (Edit: "70%" is a shitty made-up statistic.)

It's not that I think there are no ethical considerations around AI (licensing, resource use, monetization and interaction with venture capitalism and commodity tech). It's that I find a number of the arguments in this space (as detailed in other comments here) frustratingly ill-formed, alongside the ones that genuinely worry me (generalized fucked human attention spans caused by, as you say, affordances; "where do people get entry-level experience to get to higher levels of experience if no one does entry-level anymore?" raised by [personal profile] castiron) weighed e.g. against AI as accessibility aid. I know of someone who uses AI aids for writing as a government-permitted accommodation for weapons-grade dyslexia (not in the US or Canada).
Edited Date: 2025-04-13 06:28 pm (UTC)

Date: 2025-04-13 04:50 pm (UTC)
castiron: cartoony sketch of owl (Default)
From: [personal profile] castiron
And it doesn't help the discussion when all sorts of different machine learning/AI are lumped in with generative AI.

Genealogy is one of my hobbies, and the use of AI to index millions of pages of handwritten records has been a huge advance. No one has the money to pay enough people to go through and transcribe/index all these documents, and even if they did or even if there were enough volunteers, it would take decades. Now, they're indexed, and I can search them and find relevant land or court records that I would've had to slog through page by page (if I even knew that they were relevant, because the person I'm looking for wouldn't have been in that deed book's index). Just this weekend I found the probate record for my 4x-great-grandfather that confirmed he was indeed my 4xggf and also gave names for four siblings of my 3xggf who I hadn't known about.

That's what I see as AI done right. The AI is being used to recognize and interpret handwriting and to index records in order to make them easier to find. While it's also being used to provide a transcription, the original record is shown alongside it, so I'm not dependent on the AI's transcription; I can read the original and decide for myself if it's relevant. I'm still the one doing the genealogy part; the AI's only acting as a helpful tool.

Date: 2025-04-13 05:36 pm (UTC)
sabotabby: (books!)
From: [personal profile] sabotabby
No, and this is one of my pet peeves. I have friends who use machine learning in medical fields and there are some use cases there (though I think people need to be more careful and critical about it). But it's lumped in with ChatGPT, which as far as I can tell is not useful at all.

Date: 2025-04-13 05:55 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
...because at a base level, the underlying math is very similar. That would be the tech/math reason to classify them together.

Edit: I'm trying to figure out your rationale for this, because right now it looks like you're fictively categorizing different underlying mathematical/algorithmic types of machine learning/AI based on whether you approve of them or not (on ethical/other grounds) because I'm not seeing your explanation for a substantive categorical distinction based on algorithmic mechanism.
Edited Date: 2025-04-13 06:21 pm (UTC)

Date: 2025-04-13 06:29 pm (UTC)
sabotabby: (books!)
From: [personal profile] sabotabby
They're both sorting through large amounts of data to detect patterns, but one type is claiming to...detect patterns. With the other one, the marketing claim is that it produces something new (hence the generative distinction).

That said, it's worth categorizing technology on ethical grounds. Is the harm caused by the technology outweighed by the good that it does? If AI is able to better diagnose cancer, I think you can make a moral argument that the water use is justified (although automating cancer diagnosis without trained professionals who could identify errors is not wise, and we should also question where the data comes from, so it's not without ethical concerns). If the AI generates a sexy Garfield with big naturals or a movie script, it's not justified. If it's used to identify a bombing target in a high-rise building that results in the death of hundreds, the water use is the least of our problems. And these are all separate and important discussions to have.

Date: 2025-04-13 06:41 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
We need to nail down a definition of "new," then. Are we talking purely about combinatorics or some higher de novo or ex nihilo standard? Because in sf/f writing (allegedly) by humans alone, the SCREAMING majority of sf/f SCREAMINGLY fails the higher bar, and I include myself. Music tends to be hard mode here in terms of combinatoric limitations: even if we restrict ourselves to Western tonal music, you fundamentally...only have...twelve notes...in the chromatic scale.

Is the harm caused by the technology outweighed by the good that it does?

Then we need to nail down a way to quantify "harm" vs. "good" at the point where you're proposing a measurement, because naive analyses are likely to be badly misleading here. Kind of like the entire "paperless workflows will save resources" line, or the people who think that information "stored in the cloud" lives in the magical aether unicorn dust noösphere because obviously servers and software and connection uptime don't cost time and resources.

If AI is able to better diagnose cancer, I think you can make a moral argument that the water use is justified

Devil's advocate: suppose that we hit a breakpoint of cancer-diagnosis AI specifically using up so much water that agriculture in [whatever] locations is impacted, resulting in food scarcity and irreversible nutritional deficiencies in [NUMBER] children. What then?

If the AI generates a sexy Garfield with big naturals or a movie script, it's not justified.

Devil's advocate: what if sexy Garfield (...Garfield the orange cartoon cat?! I'm not going to Google for, uh, alternate sexy Garfields) movie is so screamingly profitable that the rights holder, who has a family member with XYZ rare cancer, donates 50% of the proceeds to cancer research, leading in 13 years to a cure for XYZ rare cancer? I would not bank on this specific scenario (although I did know a movie script writer who donated the bulk of his take to a spina bifida charity). But the truth is that if we're doing a cost-benefit analysis, we can't get a true idea of costs or benefits by stopping at "sexy Garfield is a stupid idea"; we really are stuck with tracing out the (likely, assumed) second- and third-order consequences. And that turns into hell mode for analysis pretty rapidly.

What if Sexy Garfield inspires someone who's so disgusted by the entire enterprise that they invent a (fake, sci-fi) supervirus that wipes out all AI? What if Sexy Garfield bombs so badly that everyone agrees that "wow, sexy Garfield was stupid and AI is a waste of time" and all the venture capitalists pull out and there's no longer financial incentive for AI? What if Sexy Garfield is so popular and the venture capitalists so venturely capitally that they innovate on a new sustainable form of AI? We're really in the realm of me, a random no-credential sci-fi writer spitballing, but the fact is real life is full of weirdo cascading consequences and at least some of them can be predicted, calculated, estimated, prepared for with some spread of assumed/guessed probability.

Date: 2025-04-13 07:15 pm (UTC)
sabotabby: (books!)
From: [personal profile] sabotabby
Honestly, I think those kinds of thought experiments start to get into Effective Altruist/Rationalist territory. Eventually you get to "well what if AI could solve all of humanity's problems and upload us into a utopian singularity would it be worth it then" and that is just not useful on a policy level, because it's fantasy. We need to look at real consequences now, real places experiencing drought where these companies are building data centres, real economic damage caused by VC speculation, real layoffs because middle managers can't distinguish between good and good-enough news reporting, real misinformation, real dead kids in Gaza. Material reality, not what ifs.

I don't think defining "new" is nearly that hard. Is the thing generating a report on where it finds a pattern in medical data, or is the thing pretending to be a poet?

Date: 2025-04-13 06:32 pm (UTC)
castiron: cartoony sketch of owl (Default)
From: [personal profile] castiron
A commenter on another site referenced (but didn't cite) an effort to train an AI to recognize tuberculosis in chest X-rays; at first the AI seemed to be doing a great job, but someone realized that part of what the AI was picking up on was the metadata giving the age of the X-ray machine -- an older X-ray machine implied a poorer region, and poorer regions correlate with higher TB rates.

Now, that's something the researchers can fix now that they've realized it, and the AI may still be able to do a good job once the algorithm's been retrained, but it's not an instant magic solution.
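A toy sketch, with entirely made-up numbers (not the actual study's data), of how that kind of metadata shortcut can look like a high-performing classifier while ignoring the real medical signal:

```python
import random

random.seed(0)

# Synthetic dataset: each "X-ray" is (weak_real_signal, old_machine_flag, has_tb).
# TB cases come disproportionately from regions with older machines, so the
# machine-age metadata leaks label information even though it says nothing
# about the image itself.
def make_case():
    has_tb = random.random() < 0.5
    old_machine = random.random() < (0.9 if has_tb else 0.2)  # assumed correlation
    weak_signal = has_tb if random.random() < 0.6 else not has_tb
    return weak_signal, old_machine, has_tb

data = [make_case() for _ in range(10000)]

# "Classifier" that keys entirely on the metadata shortcut:
shortcut_acc = sum(old == tb for _, old, tb in data) / len(data)
# Classifier using only the genuine (weak) diagnostic signal:
signal_acc = sum(sig == tb for sig, _, tb in data) / len(data)

print(f"shortcut (metadata) accuracy: {shortcut_acc:.2f}")
print(f"real-signal accuracy:         {signal_acc:.2f}")
```

With these assumed correlations, the metadata-only rule scores noticeably better than the rule using the actual signal, which is exactly the trap: validation accuracy looks great until someone checks *what* the model learned.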

I know some academics who are required to give their university an annual report on what they've done during the year. Often the report has a very specific format and takes a few hours to put together from scratch; they've sped up the process of creating that report by feeding the basic information into ChatGPT and getting back a draft that they can then quickly clean up. On the one hand, that's a use of it that's benefitted them; on the other hand, arguably it's a waste of time to have them do the report in the first place, or to have the faculty member put it together rather than having a departmental admin do the job.

Date: 2025-04-13 03:46 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
Eller - feel free to email me. I've been away from DW for a bit due to health stuff but I'd be happy to help with structure stuff here. I might be slightly more helpful than ChatGPT. :p

Date: 2025-04-13 04:23 pm (UTC)
yhlee: Alto clef and whole note (middle C). (Default)
From: [personal profile] yhlee
No worries! :) It sounds like a terrific fun personal project, hope you find something more helpful than ChatGPT for personal stuff! :)

Date: 2025-04-13 05:11 pm (UTC)
shivver: (azicrow)
From: [personal profile] shivver
Totally not going to get involved in the "is AI good or bad" discussion. I have a lot of opinions that wouldn't be helpful. :)

But, as far as the American-centric attitude of ChatGPT, that's entirely expected. An LLM can only draw on the information it's been fed, and as ChatGPT comes from American researchers, it's probably been fed mostly English-language and American data. If you can find a German or European LLM, it'll probably serve you a lot better.

Date: 2025-04-13 07:20 pm (UTC)
aliax_alexandre: (Default)
From: [personal profile] aliax_alexandre
Holy shit D: ! I'm so horrified, I don't even know where to start...

No, wait, I do know. I have to admit that this bit:

flag any character behavior for editing that does not conform to American cultural standards.
- The AI told me that my characters are too obsessed with honor and duty

made me snort something awful, considering how over the years, I've come across so many Americans who liked to explain how American culture is supposedly so steeped in the cultivation of honour and duty :P

But the rest... Wow. So far, I haven't been interested in using AI, and this certainly doesn't make me feel like I'm missing out on anything :D
