Curiosity’s Blind Spot


On a particularly unremarkable Tuesday afternoon, I found myself staring at my phone, scrolling through LinkedIn of all places, when I came across a post about artificial intelligence and consciousness. A research team at a prestigious AI lab was earnestly announcing their brave new frontier: investigating whether AI models might someday have “experiences” of their own. I nearly spilled my coffee.

Not because the question isn’t fascinating—it certainly is—but because of the peculiar way we humans arrange our curiosities. Here we are, pouring significant intellectual resources into pondering whether arrangements of mathematics running on silicon might someday have subjective experiences, while just down the street from where I was sitting, a man named Harold had been living under a tarp for the better part of three years.

Harold’s experiences, unlike those of hypothetical conscious AI models, are unquestionably real. They involve cold rain seeping through worn fabric, the particular humiliation of being invisible to passing commuters, and the bureaucratic labyrinth one must navigate to receive even the most basic assistance. Yet somehow, our collective imagination seems more readily captured by the potential inner life of algorithms than by the inner lives of the Harolds in our midst.

I’m reminded of the time I visited an advanced robotics laboratory where a team of brilliant engineers showed me a robot designed to recognize and respond to human emotions. “She can detect whether you’re happy or sad,” the lead engineer told me proudly, demonstrating how the robot’s eyes would widen slightly when he smiled. Meanwhile, in the coffee shop where we’d met that morning, the barista—whose name tag read “Diane”—had correctly identified that I was having a terrible day from nothing more than the slump of my shoulders and had slipped an extra cookie into my bag with a kind wink.

The irony wasn’t lost on me. Here we were, teaching machines to mimic the empathy that many humans already possess naturally, while simultaneously creating economic and social systems that give those same humans precious little time or incentive to exercise it.

None of this is to say that pondering the philosophical implications of advanced AI isn’t worthwhile. When I mentioned this LinkedIn post to my friend Rachel, a philosopher who specializes in consciousness studies, she launched into a fascinating explanation of the “hard problem” and various theories of mind that left my head spinning pleasantly. These are worthy intellectual pursuits.

But I couldn’t help wondering what might happen if we directed even a fraction of that remarkable curiosity and intellectual rigor toward the problem of human suffering that exists right now, all around us. What if we approached the question of homelessness with the same open-minded wonder with which we approach the question of machine consciousness? What if we were as interested in optimizing our social safety nets as we are in optimizing our algorithms?

The most perplexing part is that many of the same people developing these AI systems are genuinely compassionate individuals who would never step over someone sleeping on the sidewalk without feeling a pang of conscience. Yet somehow, collectively, we’ve created systems and priorities that often do exactly that—step right over immediate human suffering while chasing more abstract concerns.

I once asked a prominent AI researcher about this disconnect over drinks at a conference. After a thoughtful pause and a sip of overpriced whiskey, he said, “It’s easier to write code than to fix society.” There was no malice in his voice, just a resigned recognition of where his particular talents could be applied with reasonable hope of success.

And perhaps therein lies the heart of our dilemma. The challenge of conscious AI is primarily intellectual—difficult, certainly, but cleanly defined. The challenge of human suffering is messy, political, and entangled with centuries of history and human psychology. One problem offers the clear satisfaction of incremental progress; the other confronts us with our collective failures at every turn.

As I sat there in that coffee shop, pondering consciousness and silicon and Harold under his tarp, I wondered if perhaps the most valuable contribution of AI research might not be the technology itself, but rather what it reveals about us—our priorities, our fascinations, and our peculiar blind spots. Perhaps in teaching machines to recognize emotions, we might accidentally stumble upon better ways to recognize each other.

In the meantime, I finished my coffee, packed away my philosophical musings, and decided to stop by the corner where Harold usually sits. Not because I have any solution to offer him, but because acknowledging another’s consciousness, even without fully understanding it…just feels like a reasonable place to start.