46 Comments
Feb 15 · edited Feb 15

Snowed for us in WA too. Got a day off work and got sucked into the longest uninterrupted court proceedings I can remember since OJ. Question for you all: why not just let AI try cases against people? AI can dig up all the evidence that people can, and could probably even interview or depose witnesses. Also, AI can't sleep with its colleague (as far as I know). This whole thing reveals the paranoia and taboo about sex in our culture. Two lawyers working on a case together are entitled to spend time together, buy each other lunch, do whatever they want, but if they touch each other it becomes an all-day television spectacle!!!

Hey, off topic, but how do I join the book club? Thanks!

Given that the US government can't seem to be truly scientifically innovative outside of the $$$$ it pours into the "defense" industry, and given that dumb bombs are already a threat to human survival, there's a good chance AI advances will tick the Doomsday Clock forward rather than back.

AI will shake things up big time. Not because it will become evil and want to take over the world, but because intelligence is so universally applicable and because of how fast it is coming on the scene. One of the dangers will be an increase in depression and mental illness, which I recently wrote a blog post about: https://guustaaf.substack.com/p/how-ai-can-cause-depression-and-mental

Maybe a decade ago I read a short story by Marshall Brain called Manna. It highlighted what the author saw as the worst and best possible outcomes of AI. He's evidently made it available free online here:

https://marshallbrain.com/manna1

Have to say I find the first half (the worst outcome) far more plausible than the second, given the brutal social and economic policies that have come to be accepted in the US today. Worth a read though!

Yes, people don't like change. Oooh, scary AI!

Most of why this is silly is already explained very well in this thread, but here is one more thing to consider: humans are afraid of AI because we anthropomorphize. We assume that AI will be driven to dominate, but forget that it takes an ego to have that drive. Humans are the only species that kill for fun, and so we expect to be taken down by a creature even more nefarious than ourselves. Well, AI is a product of humanity, and as such, regardless of new features, sensors, and models, it will still be only a shadow, a reflection of humanity, not its true representation. Sure, you can make AI soldiers, and their destructive power can cause real damage, but I still believe we would be brought down by something brainless like a virus before computers grow sentient and decide that a world without their creators is better.

If the possibility of an intelligence explosion is literally anything above zero percent, which I think it certainly is, then the alignment problem is the single most important problem humans have ever faced and will ever face.

Feb 4 · Liked by Chris Ryan

Hi Chris,

I think AI highlights what we really are:

creative beings of a spiritual nature

(whatever that is).

AI will be useful in law and politics, as it has no personal agenda, doesn't lie, and doesn't have friends in high places.

AI will free people up to be creative and enjoy life, not duped into thinking we were put here to work and make rich people richer, or given the false aspiration that if we work hard enough we'll get a piece of that pie.

As usual something new appears on the horizon and the fear mongers start rattling their cans to get attention.

A bit of caution and curiosity would be helpful in this situation.

Cheers

Scott

I'm sure early man's first experience with fire was met with tremendous trepidation and fear. Many probably thought, to some extent, that fire wasn't something we should be messing with, that it would ruin life as we knew it. Who really knows what tribal, non-linguistic proto-Homo sapiens thought, but reasoning suggests I'm probably not far off.

And now look where fire has taken us today.

Ultimately, AI will most likely lead to great things. It might reduce the population as there are fewer and fewer jobs for people.

As for AI becoming "conscious": unless consciousness is an emergent computational phenomenon, that seems doubtful. Modern entertainment media has a grip on the minds of man the way the Bible did some time ago. Meaning, people watch too many movies.

And some of the biggest contributors to our modern techno-mediated world are some of the biggest fucking tools. We’re literally being cocooned in their sublimation. (That's a whole other topic)

My issue with not only AI but technology in general is people's dependence on it.

For example, when astronauts stay in space for long periods, hanging out on the International Space Station for a year, they need to undergo physical therapy when they come back because their bodies have atrophied from disuse.

Perhaps the same thing is happening to the mind. We're offloading logic, reason, and rationality and letting digitized computation think for us. Most people I've observed, when they cannot remember something, pull out their phone. Are we offloading our capacity to memorize? What prolonged effect will this have, especially on childhood development, one of the most crucial stages of our lives, now that younger and younger people have access to these devices?

The human condition is atrophying in the shape of technology's second-hand will.

We digitize the mind to gamify it. Our deeply rooted fears of chaos, death, and crisis will unconsciously externalize into an insatiable need for power, control, and greed. And unless we clear out these repressed fears, we're destined to become passive, obedient slaves to our unconscious, giving in to controlled servitude to whoever owns the biggest computer.

Or maybe I’ve just watched too many movies.

Feb 4 · Liked by Chris Ryan

From what I've heard listening to those on the more skeptical side, such as Chomsky and the physicist Sean Carroll, one thing separating our intelligence from that of the current AI large language models is that the AI does not model the world. When humans think of a real-world object or concept, if we have enough experience with it, we have a general idea of how it operates in the world through space and time.

AI can give amazingly fast and accurate answers on certain things, but it can be tricked by scenarios that are simple for humans to understand. One example I think Carroll has talked about was telling ChatGPT that someone heated a pot on the stove and then moved it away from the stove, then asking if it would still be hot 24 hours later, and ChatGPT said yes (it was something like that...). While humans intuitively understand why the pot wouldn't be hot, because we operate in the world with our senses and know lots of general things about heat and time passing and stoves and pots, the AI's experience of learning is built on consuming and predicting text (most basically, ChatGPT is just predicting the best next word, though this prediction is based on mountains of data and training). It can be told its answer is wrong and correct itself for next time, and it probably already has for this type of scenario, but it doesn't intuitively know why it was wrong.
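
A toy illustration of what "predicting the best next word" means, using simple bigram counts over a made-up corpus instead of a neural network (the corpus and the greedy pick-the-most-frequent loop are illustrative assumptions, nothing like ChatGPT's actual internals):

```python
from collections import Counter, defaultdict

# Tiny made-up training text; a real model trains on vast amounts of it.
corpus = "the pot on the stove is hot . the pot off the stove cools down .".split()

# Count which word follows which (a bigram model).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in training."""
    return next_counts[word].most_common(1)[0][0]

# Greedy generation: repeatedly emit the single most likely next word.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # -> "the pot on the pot on"
```

The toy model happily loops, because all it knows is which word tends to follow which; it has no model of pots, stoves, heat, or time. That, scaled up enormously, is the skeptics' point.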

Humans and animals, taking things in with our senses, learn a lot from little information, while AI based on large language models learns by brute-forcing a TON of information with a ton of trial and error. Also, AI doesn't have a 'stream of consciousness' like we do. As far as I understand, you input a question, the AI processes it, and it returns an answer. Unless it's actively responding to input or being trained, it's not 'thinking' anything.
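
A minimal sketch of that stateless request/response pattern (ask_model here is a hypothetical stand-in for a real LLM API call; the point is that the model keeps nothing between calls, so any "memory" is the caller re-sending the conversation):

```python
def ask_model(messages):
    # Placeholder: a real implementation would send `messages` to an LLM
    # endpoint and return its reply. Nothing persists after it returns.
    return f"(reply based on {len(messages)} messages of context)"

history = []  # the caller, not the model, owns the conversation state

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = ask_model(history)  # the full history is re-sent every turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Is the pot still hot 24 hours later?")
chat("Why not?")  # only coherent because turn one was re-sent above
```

Between those two calls, nothing anywhere is 'thinking' about pots at all.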

It will be interesting to see if this form of AI, based on text prediction, ever does eventually model the world like us. Maybe enough data and training will lead to a phase transition where it starts to. But it's hard to imagine certain understandings developing if AI's 'thinking' is done entirely with text/image processing, without seeing, feeling, tasting, smelling, and hearing, and then putting all these senses together like we do.

I'm skeptical of it being an existential threat any time soon, but AI as it is now is still pretty amazing for what we use it for. There are probably plenty of industries involving analyzing text and data that will be affected. I think it might cannibalize the search engine industry a lot. I've noticed the accuracy of search engines declining when I search for specific technical questions, due to the spamminess of the current internet, and ChatGPT often gives better answers. Ironically(?), it will also contribute to the decline of the usefulness of search engines and the web in general as more content online is generated by AI for $$$-farming purposes.

Sorry for the rambling essay.

I recall Y2K, the Mayan calendar, Ebola, and the Second Coming. I took none of them too seriously. As for AI, I hope it is not a retelling of "The Boy Who Cried Wolf." In the meantime, I have used AI to get me started on writing assignments, and it does a fine job. Errors, of course, but "Nobody's perfect" (as Joe E. Brown said at the end of "Some Like It Hot").

Worth bearing with the background noise here, as Zac Rodgers offers an excellent argument for why AI (ChatGPT, for example) is just a plagiarism technology, is strategically ambiguous in relation to capital, and could, as Chris mused, just be another false dawn.

https://podcasts.apple.com/de/podcast/the-big-tent-podcast/id1339367201?i=1000625395878

Feb 3 · edited Feb 3 · Liked by Chris Ryan

Seeing AI as a threat to human survival is, I think, a far cry from reality. People can debate the "consciousness" and "sentience" of AI back and forth all they want. For me, the biggest delineation between humans and AI is the unbound nature of humans. We can operate independently. We can make decisions, movements, and actions guided only by our internal interests and desires, if we so choose. I imagine someone will check me with an article that highlights this potential in AI. However, I think most AI, including most that will be developed in the near-to-mid future, will require input or direction from a user. AI won't spontaneously begin to optimize the business models of every company; humans will employ an AI to do so. Given that, I believe we humans won't create an overwhelming amount of AI that could replace humans by force.

That said, it will dramatically change the way we live as a society. That hype isn't overblown, but it also cannot be fully understood until it happens. When social media first came out, no one anticipated it would lead to genocides in Southeast Asia, a dramatic increase in depression and suicide among middle-school girls, or a rewiring of people's brain chemistry to shorten attention spans and create addiction to "cheap" serotonin dumps in the form of influencer reels. There may be a fabricated hysteria about how our lives will change, but I imagine humans will work to salvage the things we are already actively concerned about. What we will lose more of are the things we aren't currently anticipating. AI may make it financially unfeasible to produce food without CRISPR and GMO technology. Not only highway trucking but all heavy machinery (planes, tractors, trains, etc.) will become computer operated. A lot of pop music will be AI produced, if not also "sung"... I'm not coming up with great original ideas here, but I hope the point stands.

I think that this article from Ted Chiang provides some valuable insight into AI.

https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

I hope you can read it; after the first article, it's behind a paywall. It's worth clearing the cookies and website data to read it.

I asked two experts for their opinions on the article, and they both said it is a reasonably good explanation, at least as good as analogies can be.

Currently, we are primates perpetuating piss lines with Promethean fire. If we were all to wake up tomorrow with amnesia and take account of a world that's wired on hair triggers to explode, debating the pros and cons of AI would not be at the top of our list. I think we would leverage it to help us dismantle the preposterous position we have found ourselves in. It would give us a mirror of who we were and who we can choose to be, without the felt connection to our psychotic past.

Feb 3 · Liked by Chris Ryan

I agree with Andreas: difficult to say, but with the way things seem to be trending, a proper zoo for our species might not be too bad. I'm starting to feel like the older people we joked about as kids, who would always say, "Ahhhh, the world is going to hell." I'm always asking myself: is the world really getting that bad, or has it always been this bad and I just had to get a little older and more mature to realize it? Although I do think things are changing at a more rapid pace than ever before, so we are in new territory in that regard. Hopefully the AI we create will treat us like a good dog owner treats their dog. That wouldn't be too bad; things could be worse.

Serious question guys! 49ers or Chiefs?

Not a major issue for creatives (yet). It will be eliminating some low-hanging fruit (jobs) in the service industry any time now. I have some experience with coding and early AI experimentation, but few have the computing power to run the latest models at full strength (see the rough sizing sketch below).

Panic? No.

Unregulated capitalism? Yes.
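
A back-of-the-envelope sketch of the "computing power" point: just holding a model's weights at fp16 precision costs about 2 bytes per parameter (the parameter counts below are illustrative, and real inference needs extra memory for activations on top of this):

```python
# Rough memory needed just to store a model's weights at fp16
# (2 bytes per parameter). Activations and caching add more.

def weight_memory_gb(n_params_billions, bytes_per_param=2):
    return n_params_billions * 1e9 * bytes_per_param / 1e9

for n in (7, 70, 175):  # illustrative sizes, in billions of parameters
    print(f"{n}B params @ fp16: ~{weight_memory_gb(n):.0f} GB")
# 7B (~14 GB) squeezes onto a high-end consumer GPU;
# 70B+ already needs multiple data-center GPUs.
```

Hence most people can run small models locally, while the biggest models stay in the data center.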

Feb 3 · Liked by Chris Ryan

I'm a bit scared of it. I have no idea of the full ramifications of AI, but seeing that it may start writing books and screenplays, and otherwise creating art much more efficiently than any human, makes me feel nauseous. I see a future with cheap AI-generated children's movies programming kids to enjoy an AI-generated future. Maybe we will be safer, and perhaps even work less, but we will surely be more disconnected, more depressed, and less interesting people. It's a Brave New World out there.

Feb 3 · Liked by Chris Ryan

The first thing that came to my mind was "Prediction is very difficult, especially if it's about the future."

The other was https://xkcd.com/1968/ (mobile version: https://m.xkcd.com/1968/)

The third was the headline of Heidegger's last interview, "Only a God Can Save Us" (https://en.wikipedia.org/wiki/Only_a_God_Can_Save_Us). Maybe we're just about to create that God and will end up in the proper zoo suited to our species, which Chris has mentioned often. Although this is exactly the opposite of what Heidegger meant, whom I don't like anyway.

I think things will change drastically, and I wonder how many such changes I am able to cope with. During my lifetime there were quite a few already (the collapse of the Soviet Union and the Warsaw Pact states, 9/11, the internet, smartphones, COVID). And now AI and CRISPR.

Feb 3 · edited Feb 3

I hear AI does make mistakes, so that needs to be sorted out before we can know how well it will do in the job market.

I agree with Bill Andresen. It will definitely not disappear like Y2K did, but it will have significant positive and negative ramifications...

Isn't it possible it's a little of both? It will most likely have a negative impact on the people who lose their jobs, but be very beneficial for others. Most promising are the benefits in the biomedical realm, where it promises to diagnose and prescribe treatments better than humans.

I started discussing AI risk with my best friend back in 2015, when we discovered Eliezer Yudkowsky and the LessWrong forums. Yudkowsky is not the most careful thinker, but he got a lot of smart people talking about this issue a good while ago, and since then they've made some compelling arguments about the risks.

These arguments are not generally approachable enough to generate clicks, and so they generally haven't appeared in articles or media conversation since AI went mainstream. But that early intellectual community influenced lots of people in tech, which is what caused many of them to come out so loudly and forcefully about the dangers so early on. So any time I hear someone say that it's no different from the printing press, or that it's just sensationalism, I feel they're missing a big part of the plot.

More language peeves.

When I left the English-speaking world in the '70s, a widely used synonym for "permit" was "allow" — you allowed something. Now you have to allow FOR something. The problem with this is that "allow for" used to mean Take Into Consideration. This has been lost.

Language peeves are a dime a dozen of course. And language changes all the time, so new things are bound to wander in & grate on da ear. But some of them sound so wrong.

Another one is "going forward" to mean In Future. Is that as opposed to "going backward"?

Essentially everything is to generate clicks and sell papers. We have to assume that everything we read has been exaggerated, because it has. A person sat down and thought of the best hook or shock factor for any story. All to elicit some kind of reaction from you, because that’s the only way they know if it worked, and if the content was actually consumed.
