The latest headlines have been that experts warn AI could lead to human extinction. From the letter published yesterday: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." What I haven't read is what these experts think may happen. How would AI kill us? All of this is beyond my comprehension. Does anyone know what the dangers are?

Pretty sure we are on a good pace to do the extinction thing without help from AI. We just have to keep in mind that people get more attention if they say more shocking things. A headline that says "The future with AI... meh" doesn't get clicks. And of course politicians need people to be scared about something, so they eagerly jump aboard the train.
I would argue that just as often, politicians benefit from fostering an attitude of "meh" as much as they do "holy shit, we should probably do something."
All too often we have gotten the mis/disinfo tag that didn't age well. That has done much to foster the meh malaise. We either don't believe according to predisposed tribalism or we wait until another shoe falls... or did it fall? Add AI to the growing list of information shading to disambiguate.
To be clear though, I'm not really arguing for any future I think is likely, other than that of deep-fakes being indiscernible from reality and position statements written by nonhumans causing us real problems in terms of how we engage in sense-making.
Have you ever listened to members of the Freedom Caucus? Nothing is 'meh' and everything is the end of civilization. Not a whole lot different than the Progressive Caucus. This is what happens when we elect politicians with zero experience in any relevant field to make laws and try to figure out the effect of those laws. It devolves into figuring out what name gets put on the Pay-to-the-Order-Of line for US government checks. AI will be no different. Politicians have zero knowledge of the space and an equal ability to effectively regulate it. AI is so new, people can claim to be an expert if they have the proper narrative for the specific cable channel.
We are already there. To be optimistic, humans are very adaptable. Maybe we will start thinking for ourselves again?
I wonder if people will start to distance themselves from technology and social media? I've been thinking a lot lately about trying to get away from Facebook. Maybe get rid of my smartphone. They are both such a waste of time. I need to read more books, spend more time outside, exercise more, and do countless other more productive things. I say all of this as I'm typing from my iPhone.

Maybe. But social media is popular because it gives everyone a voice and an audience.
I agree. It's tough for anyone to sift through the takes. Over decades, but in particular the last few years, "authorities and experts" burned through a hell of a lot of Social Trust Capital, succumbing to some weird form of audience/funding capture.

But it has to be this way in a capitalistic market, where you need to outcompete your competition in garnering or maintaining more viewers.
But how different is that than typical talk radio? Not everyone listens to it because it's mostly garbage and most people recognize that. There's some hope.
And that's the problem. The vast majority of folks don't need to have a voice.
I was going to add that but took it out.
Hal, is that you?
Didn't mean to ignore this, just wanted to think about it. This is what I would guess: most radio shows have producers that screen calls. The bat-shit crazy never gets on air. This isn't true on the internet. On the internet/social media, the levels and nuances of crazy self-stratify into "like" groups. HT is no different. We all may disagree on topics, but when something truly crazy gets posted it gets called out. No one is immune. But HT is tame. You can go to the "dark web" and see some truly crazy stuff. We just hope they are and remain the minority.
Something really counterintuitive to me is that those who "grew" this actually do not know how it works under the hood.

I am reading a book called "The Alignment Problem" by Brian Christian that explores this issue in depth. It is very well written. The frequency with which AI programs go off the rails is startling, and because they are not "interpretable" it can be very difficult to understand why the programs failed to perform the way they were intended. The author discusses dozens of real-world examples. I'd seen the book on several "best AI book" lists and would recommend it if you are interested in technical info.