Change Blindness
From Biopsychology
I was very interested in change blindness after reading about it in Chapter 7. Change blindness is the phenomenon in which, when we view a scene, we have no memory of the parts of the scene that were not in our immediate focus. Basically, we do not see obvious things that we are not “looking for.” The example the book gives is not great because the picture does not alternate. I wanted to see if I would react the same way to an obvious, gross change in a scene.
I found the following video. Check it out!
It is not really a change blindness test per se, but it definitely shows that we don’t pay attention to things we are not focusing on!
Thank you for reading! Share your thoughts with me on bluesky, mastodon, or via email.
Check out some more stuff to read down below.
Most popular posts this month
- 2024
- Reinstalling Windows at 1am
- SQLite DB Migrations with PRAGMA user_version
- My Custom Miniflux CSS Theme
- How to Disable Wayland in Debian Testing
Recent Favorite Blog Posts
This is a collection of the last 8 posts that I bookmarked.
- The Software Essays that Shaped Me from Refactoring English
- Give Your Spouse the Gift of a Couple's Email Domain from mtlynch.io
- Skip the Next iPhone from Articles on Jose M.
- Have smart glasses finally hit an inflection point? from The Torment Nexus
- The McPhee method from the jsomers.net blog
- Pluralistic: LLMs are slot-machines (16 Aug 2025) from Pluralistic: Daily links from Cory Doctorow
- Pluralistic: Bluesky creates the world's weirdest, hardest-to-understand binding arbitration clause (15 Aug 2025) from Pluralistic: Daily links from Cory Doctorow
- Just a Little More Context Bro, I Promise, and It’ll Fix Everything from Jim Nielsen’s Blog
Articles from blogs I follow around the net
On concrete examples
I had some great conversations via email over the past couple of weeks with a bunch of different people, discussing all sorts of things that I’ll for sure end up writing about. Today I wanted to briefly touch on the topic of examples, which was pa…
via Manuel Moreale — Everything Feed October 16, 2025

Hacking Workshop for November 2025
For next month, I'm scheduling 2 or 3 discussions of Matthias van de Meent's talk, Improving scalability; Reducing overhead in shared memory, given at 2025.pgconf.dev (talk description here). If you're interested in joining us, please sign up …
via Robert Haas October 16, 2025

Should we be afraid of AI? Maybe a little
Almost exactly a year ago, I wrote a piece for The Torment Nexus about the threat of AI, and more specifically what some call "artificial general intelligence" or AGI, which is a shorthand term for something that approaches human-like intelligence…
via The Torment Nexus October 16, 2025

Generated by openring