
Artificial Intelligence and Public Comment

Although we have turned the page on this thread to be dominated by memes, here is an interesting study from MIT (still a working paper) on writing that might highlight the end effect of @rtraverdavis' problem. For the TLDR crowd: the AI made the best writers better, made the bad/average writers much better, and made everyone faster.

Abstract
We examine the productivity effects of a generative artificial intelligence technology—the assistive chatbot ChatGPT—in the context of mid-level professional writing tasks. In a preregistered online experiment, we assign occupation-specific, incentivized writing tasks to 444 college-educated professionals, and randomly expose half of them to ChatGPT. Our results show that ChatGPT substantially raises average productivity: time taken decreases by 0.8 SDs and output quality rises by 0.4 SDs. Inequality between workers decreases, as ChatGPT compresses the productivity distribution by benefiting low-ability workers more. ChatGPT mostly substitutes for worker effort rather than complementing worker skills, and restructures tasks towards idea-generation and editing and away from rough-drafting. Exposure to ChatGPT increases job satisfaction and self-efficacy and heightens both concern and excitement about automation technologies.

 
i dunno why it occurred to me until just freaking right now to use chatgpt to write some VBA script for me.

recording macros is great and easy, but often you need to tweak them, or it's just not quite doing a step like you wanted it to. i see great usefulness here and i think it's a testament to my stupidity that i'm just realizing this possibility right now. early results were flawless when i threw it a softball to see what it did.
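For anyone curious, the kind of tweak a recorded macro won't give you but a chatbot handles easily is generalizing hardcoded cell references. A minimal sketch of what that looks like (the sub name, column, and formatting rule here are all made up for illustration, not from the actual prompt):

```vba
' Hypothetical example: a recorded macro hardcodes whatever cells you
' clicked, but asking for "loop over every used cell in column B and
' bold the negatives" gets you something reusable.
Sub BoldNegatives()
    Dim rng As Range
    Dim cell As Range

    ' Limit the loop to the used portion of column B on the active sheet
    Set rng = Intersect(ActiveSheet.UsedRange, ActiveSheet.Columns("B"))
    If rng Is Nothing Then Exit Sub

    For Each cell In rng
        If IsNumeric(cell.Value) Then
            If cell.Value < 0 Then cell.Font.Bold = True
        End If
    Next cell
End Sub
```

The `Intersect`/`UsedRange` combination is the usual way to avoid looping over a million empty rows, which is exactly the sort of detail the macro recorder never produces.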

 
This doesn't extend so much into the implications of using AI for public comment, but something I've been thinking about as it pertains to the experience of thinking and being human.

I just finished up grading all of my 8th grade English students' end-of-term argumentative papers. For these papers, students had to conduct research and write an argument in the form of a letter in which they’d advocate for change on an issue of importance to them, and address and send that letter to a person or organization with the power and influence to effect said change. Many of them were excellent, and a few of them were real stinkers. This isn't unusual. What was unusual this year, though, was that, for the first time in my teaching career, three of these papers were without a doubt written by AI. They were immaculately composed, articulate, used a level of vocabulary and syntax that is extremely uncommon among 8th graders, and were completely devoid of personal touch—the sort of human quirkiness which is infused in every writer’s authorial voice, whatever that may look like for any given individual. I have no way of 100% proving these kids cheated, but there's a pile of circumstantial evidence that goes beyond my having gotten to know them as thinkers and writers over the course of the year, so I know.

But my real concern, beyond just the cheating, is that these kids are at the vanguard of what will surely be a deluge of AI-generated papers coming in from here on out, and what gets lost when students (or any of us) choose to push the easy button on complicated, challenging, but worthwhile tasks. When students choose to do that, they don't develop the critical thinking skills or grit to push through difficulty. Of course, it’s really difficult for 13 and 14 year olds to see that what they are learning to do in class is actually building life-long essential skills, despite many adults in their lives trying to connect those dots for them. I would also argue that adults lose the same sort of growth and independence that comes from struggling through challenge as well. But by turning to machines to do our thinking for us, there’s this other thing we lose that is less tangible and kind of difficult to articulate. These three papers I read were exceptionally polished and well-reasoned, but they lacked, I don’t know. They lacked soul. And it bummed me out. Maybe that is melodramatic, but damn it feels true.

I honestly believe that these leaps in AI are going to drastically increase the speed in which we are separated from the essential nature of what makes us human.
@SAJ-99 already beat me to it about calculators. Pretty difficult to predict we would all have smartphones 30 years ago, but here we are, and all those math teachers who told me I wouldn’t have a calculator were, for the most part, wrong. Not faulting them for anything, just showing that I think the biggest challenge as educators is, and maybe always has been, educating for the world we grew up in without realizing (or, more likely, not knowing) that the world the students will grow up in will not be the same. Same can be said for handwritten vs. typed. When computers first came on board, all my classes required a handwritten draft first. As I got into high school, a decade or so into the computerization of the 90’s and early 00’s, that faded and most just required a draft (which could be typed). Fast forward to today and I’d imagine there are entire assignments that are never even printed. Track changes and emailing the professor negate this (I still prefer to read hard copies).

What about teaching them how to utilize AI to write papers and then how to edit them so that they won’t be detected by the scanner? Might serve them better? Then send them on to Hunt Talk so @Nameless Range can make sure they write for the hunters and not for the politicians when submitting a bunch of ChatGPT public comments 🤣.

I totally get what you are saying, and don’t disagree with your heart to keep students human and have them learning and truly doing an assignment, but just like smartphones have impacted social development, this is going to impact many areas. I just think we often don’t look at what will be, but rather at how to preserve what has been, if that makes sense?

Only other thought would be to greatly shorten the assignments and have it all done in class and handwritten (blue book style). If you're not going to be allowed to enforce this, then it’s only going to cause the other students who did the work to have angst and eventually cave when a Friday night with friends comes up.
 
i dunno why it occurred to me until just freaking right now to use chatgpt to write some VBA script for me.

recording macros is great and easy, but often you need to tweak them, or it's just not quite doing a step like you wanted it to. i see great usefulness here and i think it's a testament to my stupidity that i'm just realizing this possibility right now. early results were flawless when i threw it a softball to see what it did.


i just realized i said "please" to the chatbot 🤦‍♂️

i dunno, maybe they code it to be nicer to nice people 🤷‍♂️
 
what did you do with those three papers?

my brother teaches 12th grade english and 12th grade AP english literature. he's come across a handful of AI submissions already. one of them was obvious: the kid submitted the whole thing with the prompt he typed into ChatGPT still at the top.

no doubt there is usefulness of AI in school work and real work, but obviously not for having something else do your school work. my brother basically tossed them back and said these are obviously chatgpt, you get a zero.
I address that in Post #179. It’s complicated. I’m going to have a heart-to-heart with each of these students, even though I’ve been instructed not to throw their papers out like your brother was able to.
 
@rtraverdavis have you considered in class writing? Everything must be written in the classroom?
I’m considering making it so that all rough drafts must be written on paper in class, and that final drafts will not be accepted without first completing and submitting those handwritten rough drafts. That way there is evidence of writing progression, which is useful anyway.
 
i just realized i said "please" to the chatbot 🤦‍♂️

i dunno, maybe they code it to be nicer to nice people 🤷‍♂️
I'm so programmed to say "thanks" or some sort of positive feedback when someone does good work that I almost always respond with something like that before I remember I'm talking to a computer. It's interesting that I almost automatically feel like I should treat it as a conscious thing...
 
I address that in Post #179. It’s complicated. I’m going to have a heart-to-heart with each of these students, even though I’ve been instructed not to throw their papers out like your brother was able to.

ah see that now. that makes a lot of sense and is a sensible approach.

i'm sure the only reason my brother can treat it that way is because you kinda start treating kids more like college students in 12th grade and therefore if you're gonna cheat you're gonna get a zero, end of discussion.

especially AP, though i don't think those incidents happened in his AP classes. i suspect he also spent some time talking about AI and how it can be useful versus just turning in what is equivalent to plagiarism.
 
I'm so programmed to say "thanks" or some sort of positive feedback when someone does good work that I almost always respond with something like that before I remember I'm talking to a computer. It's interesting that I almost automatically feel like I should treat it as a conscious thing...

kinda makes me think of when i'm walking into the rec center by my office and after checking in the girl at the front desk says "have a good workout!" and i'm like "you too!"
 
I think the biggest challenge as educators is, and maybe always has been, educating for the world we grew up in without realizing (or, more likely, not knowing) that the world the students will grow up in will not be the same.
This is an excellent point, and you and several others here have given me quite a few things to think about. I tend to have anywhere from a healthy dose of skepticism to downright repulsion toward any new change that feels like it’s pulling us away from some essential part of being human. I get the calculator analogy, but this new, exponential pace of technological advancement, and the sort of omnipotence of AI in particular, seems to have no parallel. At least as far as I can see. Maybe the printing press or motorized travel.

But as a friend pointed out to me today (from across the country, through a voice app on my phone) maybe the only essential part of being human is adapting to constant change. Maybe I’m just now old enough to feel it, and it’s damned uncomfortable. And I’m fully aware of the irony of talking about all this over the internet with a bunch of folks I’ve never met in person but feel like I know fairly well.

I need to do some more reading about how AI can be beneficial to our thinking and creativity. We’ll see. Thanks @SAJ-99 for posting that study.
 
Written by my brother from a rock-hard conservative viewpoint, his opinion piece on Fox News (which will immediately turn off the progressive/liberal Hunt Talkers) urges conservatives to understand the implications of AI. The same issues could be raised if you consider yourself progressive, depending on how the artificial intelligence code is written.


We are entering a world where seeing isn't believing. Witness the viral Trump/Clinton duet:

 
The latest headlines have been that experts warn that AI could lead to human extinction. From the letter published yesterday: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” What I haven't read is what these experts think may happen. How would AI kill us? All of this is beyond my comprehension. Does anyone know what the dangers are?
 
The latest headlines have been that experts warn that AI could lead to human extinction. From the letter published yesterday: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” What I haven't read is what these experts think may happen. How would AI kill us? All of this is beyond my comprehension. Does anyone know what the dangers are?
Arnold knows.
 
The latest headlines have been that experts warn that AI could lead to human extinction. From the letter published yesterday: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” What I haven't read is what these experts think may happen. How would AI kill us? All of this is beyond my comprehension. Does anyone know what the dangers are?
I read that misinformation will be used by bad actors, and since its "intelligence" will be greater than humans' it will be very effective and detrimental. Two differing views on reality have already got this country divided up.
 
I read that misinformation will be used by bad actors, and since its "intelligence" will be greater than humans' it will be very effective and detrimental. Two differing views on reality have already got this country divided up.

In that vein:


The article above is about how AI will be leveraged to divide and enrage, but I believe we are approaching a crisis of narrative. Do you trust politicians? Are you going to trust that video of a (insert antagonistic country here) fighter jet shooting down one of our own? In a moment requiring instant reaction, are our leaders going to be able to discern truth from concoction when fabricated audio/visual data is indiscernible from the real? Those videos of riots. Those recordings of cops killing someone on the street. Those articles as compelling as anything ever written in the name of X. In politics, the "other guy" saying something abhorrent?

It's Brave New World stuff, but it doesn't require any sort of magic to come to fruition, and I don't know how we navigate it.
 