
Artificial Intelligence and Public Comment

Resurrecting this because last night I read this paper, which came out three days ago. It's short, but interesting.


"We found no qualitative difference between AI and human-generated creativity, although there are differences in how ideas are generated. Interestingly, 9.4 percent of humans were more creative than the most creative GAI, GPT-4. "

In the paper, they do kind of question what it means to be truly creative, as these AIs require a prompt - an ask - before they will create. When you have created things, or when I have, was it "prompted"? It doesn't feel like it, but I don't know if that matters. I think we are in the middle of a highly disruptive event - as disruptive as the internet was - but it's tough to tell yet how.
I was reading an article in Forbes yesterday about AI. They mentioned a couple of tools that generate art from a prompt, so I got online and played around with them. The results look like any other digital or stylized artwork you see everywhere in print and digital media. Now I wonder… are there artists creating these, or is someone simply typing in a prompt and picking one of the outputs? Even I can create media-quality digital art now.

Creativity, indeed.
 
I just started using ChatGPT because I heard it could help me do my job. I've been at it for a week now, and it's mind-blowing how much I'm using it at work, saving me time and making me more efficient. I'm about to present this to my boss and show how we can use it to drastically increase productivity, but I'm a bit hesitant because it will mean that rather than having a team of 30 or so people, we could likely cut that down to 20-25 by fully embracing it. Scary stuff.
 
I haven't used the newest iteration of GPT (4), because it costs a subscription, but the fact that it is a vast improvement over 3 is nuts.

I can imagine a very near future where someone prompts it to "Make the case for transferable elk permits in Montana," and it will do so better than any human we have ever seen. And opponents will prompt it to generate a rebuttal, and that will be better than any case against transferable elk permits we have ever seen. Even if we could verify that a public comment comes from a human, the content of the comments themselves is destined not to come from humans. They will be novel, persuasive, and well written - and they won't be from a person.

This will happen around all manner of subjects. It's a disturbing power, given how human brains work - human brains usually decide their position on an issue not based on the logic of a thing, but on how that thing makes them feel. They then rarely deviate, regardless of how the logic pans out.
 
I asked a programmer friend about it and he told me there is a feedback option where you can paste in text and ChatGPT will give a percentage chance the text was written by the program. It was apparently used recently to catch university students submitting ChatGPT written papers.
Spelling errors
 

I think over time, this will be an arms race won by the AI.
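For what it's worth, the "percentage chance" these detectors report is just a statistical score mapped to a probability; the detector has no record of what ChatGPT actually wrote. Here's a toy sketch of the idea in Python. The heuristic below (penalizing uniform sentence lengths, since machine text is often more uniform than human text) is made up purely for illustration and is nothing like a real detector's model:

```python
import statistics

def ai_likelihood_score(text: str) -> float:
    """Toy heuristic, NOT a real detector: returns a score in [0, 1],
    where uniform sentence lengths push the score toward 1."""
    # Crude sentence split on terminal punctuation.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    if len(sentences) < 2:
        return 0.5  # not enough signal to say anything
    lengths = [len(s.split()) for s in sentences]
    # Relative spread of sentence lengths: low spread -> "uniform" text.
    spread = statistics.pstdev(lengths) / statistics.mean(lengths)
    return max(0.0, min(1.0, 1.0 - spread))
```

Perfectly uniform sentences score 1.0; varied ones score lower. Real detectors use a trained language model rather than a hand-made rule like this, which is also why the arms race favors the generator: anything a statistical detector keys on, the generating model can be tuned to avoid.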

 
I have heard and read that 4 is much better. The speed of improvement is amazing. The implementation of the technology is incredibly deflationary. Why would you hire a junior analyst to do anything? And if you don't hire for entry-level positions, eventually you don't have senior-level people either.

 
Ultimately, the decision on making elk permits transferable in Montana may be made by AI, taking all human emotion out of it. Spock will be impressed.
 
@Nameless Range Thanks for bringing this to light. I also work for a very large tech firm and ChatGPT is just the tip of the iceberg on where technology is headed. Most people don't know the real capabilities of technology that is available right now. Makes a sci-fi movie pale in comparison.
 
Figured I would bump my initial response within the "Starlink" thread over here as my tin foil Bailey cowboy hat was growing beyond the Starlink intent... Haha!

I don't have Starlink, but I saw the satellites cruise overhead tonight. Not knowing what they were at first, I was a bit creeped out.
A friend who works with JPL (NASA's Jet Propulsion Laboratory) shared the link (bottom of post) and said it is likely the most up-to-date public source of satellite information. Yes, creeped out is my thought as well, though it's going to expand immensely as artificial intelligence continues to learn upon itself.
We humans have learned how to create the World Wide Web (WWW - credit to Al Gore, of course... :ROFLMAO: ), which, in a fraction of a second (depending on internet speed), gives us a response drawn from a mass of information disseminated worldwide, specific to our search criteria... The leap into AI is a continuously expanding web reaching out to every known database.

Picture AI learning from itself with a million times greater ability... There's a dedicated U.S. government facility of massed AI computation hardware in an area where no electronic devices may be brought within "x" distance of the facility, because the AI's ability to learn upon itself is faster than humans can stay ahead of.
It's beyond, and I get it... many people dismiss the tin-foil shared info, though, knowing the people involved - for myself, the tin foil is my straw "Bailey" cowboy hat. Haha!

Returning from my "extremist info", here is an amazing view of current human-made items orbiting our world:



This is a view of active human satellites in orbit:


Credit:
 
This doesn't extend so much into the implications of using AI for public comment, but something I've been thinking about as it pertains to the experience of thinking and being human.

I just finished up grading all of my 8th grade English students' end-of-term argumentative papers. For these papers, students had to conduct research and write an argument in the form of a letter which they’d advocate for change in an issue of importance to them, and address and send that letter to a person or organization with the power and influence to affect said change. Many of them were excellent, and a few of them were real stinkers. This isn't unusual. What was unusual this year though was that, for the first time in my teaching career, three of these papers were without a doubt written by AI. They were immaculately composed, articulate, used a level of vocabulary and syntax that is extremely uncommon among 8th graders, and were completely devoid of personal touch—the sort of human quirkiness which is infused in every writer’s authorial voice, whatever that may look like for any given individual. I have no way of 100% proving these kids cheated, but there's a pile of circumstantial evidence that goes beyond my having gotten to know them as thinkers and writers over the course of the year, so I know.

But my real concern, beyond just the cheating, is that these kids are at the vanguard of what will surely be a deluge of AI-generated papers coming in from here on out, and what gets lost when students (or any of us) choose to push the easy button on complicated, challenging, but worthwhile tasks. When students choose to do that, they don't develop the critical thinking skills or grit to push through difficulty. Of course, it's really difficult for 13- and 14-year-olds to see that what they are learning to do in class is actually building lifelong essential skills, despite many adults in their lives trying to connect those dots for them. I would argue that adults lose the same sort of growth and independence that comes from struggling through challenge as well. But by turning to machines to do our thinking for us, there's this other thing we lose that is less tangible and kind of difficult to articulate. These three papers I read were exceptionally polished and well-reasoned, but they lacked, I don't know. They lacked soul. And it bummed me out. Maybe that's melodramatic, but damn it feels true.

I honestly believe that these leaps in AI are going to drastically increase the speed at which we are separated from the essential nature of what makes us human.
 
OpenAI released a tool that lets you feed in text and returns the probability that it was generated by AI. I read an article about it in The Economist earlier this week.

How do you intend to approach your students about it?
 
I ran the papers through the AI detector and they all came back as more than 96% likely to have been AI generated.

My principal has advised me not to press the issue with these particular students because of factors that I won’t go into here. But I have already started putting together ideas for a short unit for the beginning of next year that will examine the issue and hopefully demonstrate to kids what they lose by turning to AI—and to show that their teachers are not as dumb as they think—as well as how I will address the use of AI for class assignments with parents. Brave new world.
 
