“Recent AI Experiences” by abramdemski
Description
In my previous post, I wrote about how computers are rapidly becoming much more whatever-we-want-them-to-be, and this seems to have big implications. In this post, I explore some of the practical implications I've experienced so far.
I mainly use Claude, since I perceive Anthropic to be the comparatively more ethical company.[1] So while ChatGPT users in the spring were experiencing sycophancy and their AIs "waking up" and developing personal relationships via the memory feature, I was experiencing Claude 3.7.
Claude 3.7 was the first LLM I experienced that felt like a competent "word calculator" over large amounts of text: I could dump in a lot of context on a project (often copying everything associated with a specific LogSeq tag in my notes), ask questions, and expect answers that don't miss anything important from the notes.
This led me to put more [...]
The original text contained 2 footnotes which were omitted from this narration.
---
First published:
October 3rd, 2025
Source:
https://www.lesswrong.com/posts/pJ2ZRHfTFWPymtkFK/recent-ai-experiences
---
Narrated by TYPE III AUDIO.