Do Something

Be a Better Philadelphia Citizen

One of the founding tenets of The Philadelphia Citizen is to get people the resources they need to become better, more engaged citizens of their city.

We hope to do that in our Good Citizenship Toolkit, which includes a host of ways to get involved in Philadelphia — whether you want to contact your City Councilmember about the challenges facing your community, get those experiencing homelessness the goods they need, or simply go out to dinner somewhere where you know your money is going toward a greater good.

Find an issue that’s important to you in the list below, and get started on your journey of A-plus citizenship.

Vote and strengthen democracy

Stand up for marginalized communities

Create a cleaner, greener Philadelphia

Help our local youth and schools succeed

Support local businesses

Connect with our Social Action Team



Join us!

At Ideas We Should Steal Festival 2025

The Philadelphia Citizen’s Ideas We Should Steal Festival® presented by Comcast NBCUniversal returns for its eighth year on November 13 and 14 and features our Inaugural Ideas We Should Scale Showcase. We are once again bringing changemakers and innovators to our problem-solving table, inspiring change and basking in hope.

Find all the details and pick up tickets for the festival here!

In Brief

How AI is changing humanity

Seventy-five years ago, Alan Turing published the academic paper that is widely considered to have launched the field of artificial intelligence, a paper that asked: “Can machines think?”

The philosophical questions about thinking machines, which he laid out in this paper and distilled into what came to be known as “The Turing Test,” still endure as a foundation for how we measure and debate AI today. Every time we interact with ChatGPT, Claude, or any modern AI interface, we are living in the world Turing imagined, where the line between human and machine intelligence has become increasingly blurred.

David Williams has been writing a series of 75 weekly essays covering 75 years of AI history. Through this endeavor, he’s become focused on the question of how these AI tools are changing us. Should we instead be asking a new question: “Will humans come to think like machines?”

Guest Commentary

Can Machines Think?

On the 75th anniversary of mathematician Alan Turing’s famous essay, that question may no longer be the most important in our AI future


Seventy-five years ago this month, mathematician Alan Turing published the academic paper that is widely considered to have launched the field of artificial intelligence (AI). The paper, titled Computing Machinery and Intelligence, opened with a deceptively simple question: “Can machines think?”

Today, Turing is perhaps best known through Benedict Cumberbatch’s portrayal of him in the 2014 film The Imitation Game, which focused on his wartime codebreaking. But it’s the philosophical questions about thinking machines, which he laid out in this paper and distilled into what came to be known as “The Turing Test,” that still endure as a foundation for how we measure and debate AI today.

In simplest terms, The Turing Test was intended to show that if a machine could convince a human interrogator through conversation that it was human, it should be considered capable of thinking. The questions and debates it launched about consciousness, intelligence, and what makes us human continue to this day. Every time we interact with ChatGPT, Claude, or any modern AI interface, we are living in the world Turing imagined, where the line between human and machine intelligence has become increasingly blurred.

The Royal Society in England is marking the 75th anniversary of Turing’s paper with a major celebration, featuring computing pioneer Alan Kay, prominent AI researchers like Gary Marcus, and widely cited neuroscientist Anil Seth, all gathering to reflect on Turing’s legacy.

Facing a different question now

For the past 14 months, I’ve been conducting my own type of reflection through my newsletter, A Short Distance Ahead, a series of 75 weekly essays covering 75 years of AI history.

Each week, I explored a single year in history to uncover how its broader context shaped the development of AI, and for each essay I used the newest AI interfaces and models to help me research and structure the writing. It’s an experimental project that I embarked on with a deep concern about the impact AI tools will have on human creativity and storytelling, while learning about the history of how they came to be. The title of the newsletter comes from the last line of Turing’s 1950 paper: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”

Over the last few weeks, I’ve come to consider that 75 years after Alan Turing asked, “Can machines think?” we should be much less focused on that question and much more focused on the question of how these AI tools are changing us. In other words: “Will humans come to think like machines?”

We’ve built elaborate frameworks for evaluating machine intelligence — benchmarks, perplexity scores, economic projections. But we have no comparable framework for understanding the most critical transformation of all: what these tools are doing to human thought itself.

The studies exist, scattered across journals, tech publications, and think pieces. But they remain disparate data points, not a cohesive model for understanding our evolving relationship with AI. We’re so busy measuring whether machines can think like humans that we’re not systematically tracking how they’re reshaping us.

It reminds me of something we’re only now reckoning with: smartphones and social media, and how we handed addictive technology to kids without asking what kind of relationship we wanted them to have with it. Now we’re watching a generation struggle with mental health challenges, short attention spans, constant comparison to others, and deficits in human connection.

The solution isn’t to abandon these tools. It’s to approach them with a deliberate awareness of how they’re changing us.

We’re at that same inflection point with AI, and it’s not too late to reflect on the types of relationships we want to have with it.

To follow this more closely and understand it better, I highly recommend subscribing to Julia Freeland Fisher’s newsletter Connection Error. I’ve only discovered her writing recently, but she does a great job of surfacing meaningful reports and studies and is regularly “exploring our changing relationship with AI and how AI is changing our relationships.”

Take, for example, a study from Upwork showing that workers using AI tools wrote differently: not just when using AI, but even when they were writing without it. Their natural writing had shifted to match AI’s patterns. It’s only one example of how we’re not just using these tools — we’re absorbing them. New research also shows that even productivity apps like TK are designed to create emotional attachment, not just efficiency. We’re building dependencies we don’t fully understand. And when Harvard Business Review warns about “workslop,” it’s not just describing reality; it’s showing us choices we’re making without realizing we’re making them.

The evidence of this transformation is being documented by many national publications (albeit often the same ones most criticized for being alarmist about technology or formulaic in their arguments). For example, The New Yorker has been tracking how AI infiltrates our daily routines, examining what it means when we form “relationships” with systems that only simulate empathy. It has also reported on companies like Atlassian betting $600 million on AI browsers like Dia — a browser that promises to “know a lot about you” and that probably represents how we’ll interact with this technology going forward.

Meanwhile, The Atlantic has published a series of warnings: Tyler Austin Harper argues that AI dating concierges could erode our capacity for autonomous connection by outsourcing the work of intimacy. Russell Shaw explores why AI chatbots can never truly be children’s friends, pointing out how they short-circuit the way kids learn to handle conflict. And Nicholas A. Christakis asks us to consider how AI fundamentally rewires human connection itself.

And these are just the mainstream outlets. Across Substack, independent researchers and writers (many doing more nuanced work than any major publication) track these changes daily, with some of the most dedicated work being done by the Center for Humane Technology. But documenting these changes isn’t enough. What we lack isn’t evidence but synthesis. We’re drowning in observations while thirsting for understanding. We have no unified framework for making sense of how these tools are reshaping us, no equivalent to the benchmarks and leaderboards we use for AI itself.

Each report, each data point, each article is really asking us a question: Is this the relationship we want with these tools?

Without a framework to connect them, the transformation taking place in human behavior feels like what, in many ways, it actually is: fragmented, confusing, and complicated.

Learning from history

So, as I often do, I’ve turned to history. The past reveals patterns, showing us what happens when new technologies reshape our ways of thinking and offering lessons from those who recognized the transformation as it unfolded. And no one seems to have thought more passionately about this challenge than Neil Postman.

Postman, greatly influenced by Marshall McLuhan, spent his career warning us that technologies aren’t neutral tools, but instead environments that reshape how we think. I turned to his book Amusing Ourselves to Death for perspective after we first elected a reality TV star as President. In it, Postman showed how television didn’t just change what we watched, but how we processed reality itself, by reducing complex ideas to entertainment. His key insight was that we become what we behold. The medium shapes the mind.

Postman always asked: What kind of people does this technology create?

With AI, I fear the technology is being stewarded by many people (albeit brilliant ones) who think like machines: optimized, efficient, statistically average, data-rich but story-poor; all while hoping machines will become more like us.


The solution isn’t to abandon these tools. It’s to approach them with what Postman called “technological resistance,” a deliberate awareness of how they’re changing us. This means working diligently at creating spaces where different logics prevail. Where efficiency isn’t the highest value. And where the statistically unlikely response might be the most human one, and therefore, the most valuable.

Here’s what gives me some hope: We still have time to shape this relationship. And although, unfortunately, in the words of George Saunders, I have considerable doubt that we are “a culture capable of imagining complexly,” I still have hope.

I have hope because the real conversations that matter won’t happen in think pieces or on social media. They’ll happen in small moments: parents talking to kids about why they should write their own homework, colleagues deciding together when AI helps and when it hinders, friends admitting they feel strange about outsourcing their thoughts to machines, or asking one or two more questions about a topic they worry they don’t understand well enough or fear sounding too judgmental about.

It will be messy. We’ll contradict ourselves. We’ll use AI to write about not using AI (as I do in my newsletter). But that’s okay. That act of noticing, of pausing to ask, “What kind of relationship do I want with this technology?” is where our human agency lives.

Ultimately, I’ve come to believe — maybe naively, or idealistically — that the most powerful form of media today isn’t AI or TV or social platforms or Substack. It’s, as MIT Professor Sherry Turkle has modeled, human conversation, in real life. Where we can work through these questions together. That’s the gap between the ones and zeros that matters most, because that’s where our narratives take shape. That’s where we don’t just process information, but absorb and decide on the stories that we choose to believe and tell ourselves and others. That’s where we’ll figure out how to use these tools without losing ourselves in them.

The next chapter of AI’s history isn’t just about what machines can do — it’s about what we decide to do with them.

And that story is still being written.


David Williams writes the Substack A Short Distance Ahead, using AI to help research and structure his essays. He will be speaking about the history of AI at the 1682 Conference at the Barnes Foundation on October 8.

The Citizen welcomes guest commentary from community members who represent that it is their own work and their own opinion based on true facts that they know firsthand.


Photo by Growtika on Unsplash
