Is There Such A Thing As Accountable AI?

Wharton’s latest foray into studying Artificial Intelligence looks at the regulatory, ethical and governmental considerations of these technologies

In late January 2025, the Chinese technology firm DeepSeek sent the American stock market into a frenzy, pushed experts to issue cybersecurity warnings, and inspired Marc Andreessen, the tech mega-investor, to declare it “AI’s Sputnik moment.” Not since the release of ChatGPT in 2022 had there been such a galvanizing moment in the world of artificial intelligence.

Just like the breakthrough of the Soviet satellite during the Space Race, the recent success of DeepSeek — which has developed a large language model (LLM) that’s far less costly and far more efficient than ChatGPT, albeit with loads of security issues — has been a wake-up call to engineers in the United States. And by releasing the model in an open-source fashion (allowing anyone to borrow from and build upon it), DeepSeek will only accelerate today’s supercomputing arms race.

Inevitably, more AI means more anxiety for a lot of people, and for good reason. Automation is imperiling jobs and industries, perpetuating racist stereotypes, creating deepfakes with ease, and being used in attempts to influence our elections. If you’re not scared enough, go look up “autonomous weapons.”

Despite those concerns, AI is creeping into our daily lives and becoming increasingly embedded in the fabric of our city. The Philadelphia School District just announced a pilot program, “Pioneering AI in School Systems,” for its teachers, taught by Penn instructors. Our renowned medical research community is harnessing AI in myriad ways. Even our state government is trying it out.

So, as the use cases grow, who is looking at the safety and efficacy of these new tools in the business world? Who is studying the best practices for AI that’s more ethical and effective for society? Who’s developing a blueprint for avoiding the robot apocalypse?

In January, Wharton launched a brand-new research initiative, the Accountable AI Lab, which aims to study those questions (except maybe the last one) and contribute to better uses of AI in the business world and society.

I caught up with Kevin Werbach, a Wharton professor of legal studies and business ethics, who leads the lab and hosts the Road to Accountable AI podcast. This interview has been condensed and edited for clarity.

Malcolm Burnley: You’ve taught a class on this topic for almost a decade. What’s changed in the world of AI since then?

Kevin Werbach: So, in 2016 [when I began teaching the course] there was actually a lot of AI. People tend to forget that there was a huge wave of innovation and adoption in deep-learning-based AI methods, starting in the mid-teens. But those were mostly [business to business]. Those were mostly things that companies were using to optimize their operations. Then we had the ChatGPT moment in late 2022, and since then, we’ve had this explosion of generative AI activity. And that’s not replacing everything else, but that’s been increasingly a focus.

What’s shifted since then is the degree to which these are not just topics that experts are focused on, but are very much in the public conversation — and in the political conversation. It’s fair to say that in 2005, when Mark Zuckerberg was barely out of his college dorm, it was unreasonable to expect people to see that Facebook was going to become this massive behemoth influencing global society. But there was a period where that was becoming evident and we didn’t move fast enough. I think with AI, it’s very much in the center of the conversation among major governments around the world and the new administration.

The other thing is that we just have a lot more maturity. Ten years ago, people knew that AI systems could be biased. Now, there’s 10 years more of research and the issues are more and more nuanced. What are the different ways to test for bias? What is fairness? How do we make those choices? What are organizations doing? What are the tools that are available to organizations to test and mitigate that and other kinds of risks and harms? What are the practices around AI governance?

We’re talking a lot more now about responses and solutions, as opposed to just talking about problems, which I think is a healthy thing, but we still have a long way to go. We still don’t know what all the solutions are, and it’s a moving target, of course, because the technology evolves.

Why go with “accountability” instead of “ethical” or something else in the name of your new lab?

For many years, I’ve been one of the faculty who teaches a required MBA course called “Responsibility in Business.” On the first day of class, I ask the students what they think responsibility means. It actually is interesting and challenging to unpack. What does it mean to be a responsible individual, a responsible employee in a business context? And what does it mean to have a responsible organization?

Kevin Werbach, host of the Road to Accountable AI podcast, on Penn’s campus.

We do a word cloud on the screen, and we talk through what those different things mean. And the term that I’ve typically landed on when we have these conversations is accountability. Ethics is an incredibly valuable and powerful thing. But no one believes you shouldn’t be ethical, and if you want to understand what that means, you have to really dive into the important and challenging questions about what it means to be ethical. Accountability, I think, is valuable for a few different reasons. One is that it brings in some notion that there are consequences, and the consequences are not necessarily just punishment. If you do something wrong, the consequences might be a voice in the back of your head that says, This is not really what I should be doing.

You know, the word accountability, it’s got counting in it. It sort of encourages you to think about this not just in terms of feelings, but in terms of structures and measurement and management and governance.

There are a number of relatively new initiatives at both Wharton and Penn related to AI. Where does your lab fit in with those existing projects?

Penn is a very big place. Even the Wharton School is quite a big place. We have 10 academic departments just in the business school. So within Wharton, there is a large initiative that’s called Wharton AI and Analytics. Its predecessor was the Wharton Analytics Initiative, which is now almost 20 years old and has been working on issues regarding the use of analytics, including machine learning and AI-type technologies. At the behest of the dean about a year or two ago, they redoubled and expanded their activity explicitly regarding AI as an area of focus. For example, we have a Generative AI Lab that Ethan Mollick is leading. We have a Healthcare Analytics Lab led by Hamsa Bastani, and so forth. I am interested in having conversations with people in some of those other places and looking for possibilities for collaboration.

How do you imagine partnering with companies that might consider their information proprietary?

I speak to many executives in companies who are very concerned about these issues, about AI governance. For some of them, it’s from a compliance standpoint. There is a good deal of law that is relevant, including new AI-specific laws that are being adopted [along with] existing laws that are relevant because of ways that AI systems might run afoul of them. So I’ve been engaged with companies, and a lot of what I am hearing from them is, We don’t know what everyone else is doing. We don’t know what the best practices are.

So companies know that they’ve got to get their arms around it in a way that doesn’t unreasonably limit their ability to compete and innovate and experiment and gain the benefits of AI. So, there’s a range of potential corporate partnerships. It could be just informally having conversations, which I’m already doing with executives and companies. I anticipate we will have some sort of mechanisms — whether a formal advisory board or events in the future — where we are bringing together people from different companies. Also, we’ll look to do surveys and interviews, and to publish reports that are relevant to their business decision-making but also answer an academic question: What is the state of practice [around AI]? We’re basically just now engaged in those conversations.

Are corporations receptive to having these discussions?

If you build a system that blows up and kills people, or you build a companion AI that your users become addicted to and kill themselves [over it], or you build a system that unintentionally but systematically discriminates against certain kinds of people, no company wants to do that. No company wants to stand up after that’s revealed and say, Oh, we didn’t think this was going to happen. We didn’t really do the kinds of things we could have done to minimize those harms. And so these are important issues, regardless of what the political environment is.

You launched in January, amidst a new presidential administration that’s taken action on AI, along with the hullabaloo over DeepSeek. Has the timing of the lab felt prescient?

I study emerging technology. That’s my core research area as a faculty member. I’ve been doing that for 20 years as a Wharton professor, and really 10 years before that. Technology is always moving quickly, and I am always interested in getting involved while it is still maturing, while it’s still uncertain what the impact will be.

When I first started doing Internet policy in the mid-1990s, the Internet was not nearly universal — and yet we could see the potential for it to be totally transformative. And I thought that was really the most important time to be addressing the legal and policy questions that it raised.

“No one should be confident they understand the future trajectory of generative AI innovation, or where the opportunities for innovation lie.” — Kevin Werbach

We’re at a similar stage with AI. We’re still very early. Things are still changing very fast. There’s still great uncertainty. There is no question that there’s a tremendous amount of hype. But there’s uncertainty about how real some of the business benefits will be or how long they will take. There is uncertainty about the business model for all these companies. There is uncertainty about the speed of advancement of the technology. DeepSeek is a good example: people are now questioning the notion that the biggest companies with the most computing resources inherently have an advantage. But the fact that we don’t know exactly what the AI world is going to look like in five years, or even six months? That we’ve known from the beginning.

What’s caught your eye about the conversation following the latest release of DeepSeek’s LLM?

The most shocking thing about the response to DeepSeek is how shocked people are. No one should be confident they understand the future trajectory of generative AI innovation, or where the opportunities for innovation lie. And no one should be surprised that China has outstanding AI engineering talent. That being said, as we get more information, it looks like DeepSeek is more a story about the uncertain impacts of open source AI than a fundamental re-setting of AI cost expectations.

President Trump issued a series of executive orders around AI, which included efforts to deregulate the sector and dismantle a Biden policy aimed at guarding against AI bias. Has anything surprised you from the White House so far on this front?

From a policy standpoint, they’ve done exactly what they said they would do, which is to vacate the Biden administration’s executive order and to make it a priority to promote U.S. AI development and deployment. Perhaps what I hadn’t thought about as much was what we are seeing with Elon Musk and the [Department of Government Efficiency] and how much of a focus AI is in terms of this administration’s view of how they want to operate the government. That’s getting into all of these issues, not from a regulating standpoint, but from an operational standpoint. But we’re still early. I obviously couldn’t have predicted exactly what has happened the past couple of weeks, but there’s nothing where I would say, Oh, that’s a shocker. Because, again, this was an issue that was discussed in the campaign. This was an issue where President Trump and those around him had very clear views that they were not shy about describing.

You referenced a missed opportunity for government regulation of social media companies in the past. What’s the appetite among legislators for intervening more forcefully with AI?

Will we have AI legislation at the national level? That’s something that’s going to take a lot of time and work on the part of Congress. Even if you have a deregulatory government in Washington — in fact, especially if you have a deregulatory government in Washington — they are going to need to act if they don’t want to cede control of this issue to the states, which are very active. And so the more that states push forward with AI legislation, the more pressure there will be to preempt that at the federal level. It’s too early for that dynamic to fully unfold, but that’s certainly coming. It’s not just a matter of there having been AI law under Biden, and now it’s gone. This is a very complex, dynamic situation. Plus, the European Union has an AI Act they are implementing. They’re not going to step back from implementing it because the U.S. takes a different viewpoint.

Photo by Igor Omilaev on Unsplash
