It’s getting harder and harder to find a good fall guy. Think about it: Porngate, the Attorney General went down. Chaka Fattah went to prison. And the Johnny Doc dragnet of a few years ago has resulted in not only underlings being convicted, but also top players like Bobby Henon (and soon, barring lax sentencing, the union honcho himself).
These days, though, political scapegoating just ain’t what it used to be.
Enter Sheriff Rochelle Bilal, who has faced myriad ethical questions since taking over the job in 2020, despite campaigning as a reformer who’d clean up the embattled office. The latest embarrassment was the result of an Inquirer investigation that found more than 30 fake headlines listed on her campaign website that offered flattering accounts of her tenure. Although the headlines were credited to local news organizations, none of them corresponded to actual stories, meaning website visitors were left with a positive impression of Bilal’s imaginary accomplishments.
After taking down the content, Bilal’s campaign issued a statement that blamed it on — wait for it — ChatGPT. “Our campaign provided the outside consultant talking points which were then provided to the AI service,” the statement read. “It is now clear that the artificial intelligence service generated fake news articles to support the initiatives that were part of the AI prompt.”
Generative AI such as ChatGPT makes a perfect fall guy. You give it some directions, and it comes back with the job done, no questions asked. Even better, because chat history can be turned off, you may never learn why it chose the text it did or what prompted it in the first place (for example, whether ChatGPT was asked to mimic positive headlines or to find genuine ones online). That opacity gives leaders a degree of plausible deniability when ChatGPT gets misused.
The bogus Bilal headlines showcase the dangerous uses of AI in electoral politics, which has already become a storyline in the upcoming presidential race. However, there’s a more promising role for AI to play in the day-to-day running of the government.
In fact, just last month Governor Shapiro announced a first-in-the-nation partnership with ChatGPT’s parent company, OpenAI, to pilot generative tools for state employees. While other states have placed legal limits on AI — Maine last year banned executive branch workers from using ChatGPT — Shapiro intends to make PA a leader in “the safe and responsible use of generative AI,” according to a press release. That announcement came on the heels of President Biden signing an ambitious executive order on AI. And the state legislature considered no fewer than nine pieces of legislation last year seeking to regulate or set precedent for how to properly use AI.
Currently, the city’s Office of Innovation and Technology does not have a formal policy surrounding AI — and instead, has focused on implementing new technologies on a limited basis, with projects like ZenCity (used to track public opinion on Covid preparedness through 311 data and social media posts) and SmartBlockPHL (a program that implemented 14 streetlights that collect intricate metadata on air quality and more).
It’s not yet clear how Mayor Parker views AI utilization and regulation within City Hall. But with AI increasingly in the forefront of our lives, the new mayor and City Council will need to establish a clear strategy. Here are some places to start, from other cities using AI for good:
Making roads safer
When Mayor Kenney came into office, he embraced Vision Zero, the national movement calling on policymakers to eliminate all fatalities and serious injuries on our roads. But by many measures, the situation has only deteriorated since the pledge. Philly now has a traffic fatality rate double that of New York City or Boston, and as JP Romney reported for the Citizen, we are averaging “60 more people [who] are dying each year than in the years before we tried to fix things.”
The California Office of Traffic Safety is trying to curb traffic accidents with generative AI tools, while calling on tech companies to help develop more sophisticated solutions. For example, the state has put out a procurement request for a warning technology modeled on California’s earthquake warning system, but for traffic: alerting drivers a few seconds in advance of impending dangers such as a wrong-way driver, unexpected fog, or wildfire smoke. Meanwhile, a handful of states have turned to companies like INRIX, a transportation analytics firm that provides AI solutions to improve traffic flow, safety, and congestion.
Locally, SEPTA is already experimenting with AI with a few pilot programs. After one initiative successfully detected hot spots for illegally parked vehicles and other obstacles in bus lanes, city officials gave the go-ahead to begin ticketing drivers using the same AI technology. Another tool is being used to detect guns among riders, and a third, a partnership with Penn State, is aimed at bolstering efficiency.
And then there’s potholes. Earlier in the pandemic, the city placed a bunch of high-tech sensors on city vehicles to measure road conditions and identify potholes, but the program was not expanded. It might be time to revisit the role of AI here, seeing that the situation remains rough. Cities around the country are leveraging various AI-driven tools to save our hubcaps.
City planning
The world of architecture and urban planning was startled by a study published last year by scientists from Tsinghua University in China. The study concluded that AI systems had mastered many of the monotonous and time-consuming tasks involved with designing a better-built environment, like scanning the labyrinth of zoning codes or assessing the impact of a development project on traffic routes. Compared to human-only designs, the scientists found that AI-supported designs “could increase access to basic services and parks by 12 and 5 percent, respectively.”
Google’s “Tree Canopy” is another tool for planners, using aerial imaging that determines urban green coverage and provides block-by-block knowledge about heat retention and air pollution. This could be a game-changer for Philadelphia, which boasts an “F” grade for air quality from the American Lung Association. Philly’s legacy of racial and class segregation has left our poorer and Black and Brown communities the most vulnerable to those effects, in part because many neighborhoods lack green space and foliage, damaging both mental and physical health.
Another promising application is AI’s ability to rapidly forecast changes to the urban environment by using real-life specifications in a “digital twin” environment. Arch Daily recently reported on multiple AI tools that offer this sort of “rapid urban prototyping.” These tools allow urban planners to test the impact of, say, a new high-rise condo building on Chinatown, and through design alterations, give them the chance to mitigate negative externalities like traffic and noise ahead of any groundbreaking.
Boosting communication
Mayor Cherelle Parker’s new media and communications policy — requiring every social media post, interview request, or statement of any kind from every single city department to run through the mayor’s office — seems like a recipe for backlog and confusion. But proponents of AI believe that ChatGPT could do the opposite: increase the efficiency of communication between local government and the public.
These functions could take many forms. We’ve already seen political operatives use AI to launch fake Biden robocalls. But what about automated calls and emails from city departments regarding Council hearings, zoning changes, and neighborhood events, which could go out as soon as the information gets published on a city website? ChatGPT could also be used to generate summaries of hard-to-understand policy language, translating legalese for the layman, or to summarize civic meetings that few residents are able to attend. And another potential use: combing through citizen complaints, 311 data, and survey results to find correlations that could inform policymakers.
Of course, there are risks that must be considered and mitigated before widespread adoption. Using ChatGPT as an outsourcing tool to eliminate communications jobs could result in more problems, if it lessens government accountability or spreads misinformation. On the other hand, given the worker shortage in City Hall, where 20 percent of jobs are unfilled, this could free up employees to do more essential work.
Experts have noted cybersecurity concerns and warned that the prompts city employees enter into ChatGPT could be subject to public records requests, which means that limiting how it gets used with sensitive or privileged information would be essential.
All of this emphasizes the need to develop thorough safeguards and standards around the use of AI, like what the Shapiro administration has begun to do on a statewide level. While Mayor Parker’s transition team included several tech leaders, no decision has been made on appointing a chief information officer to run the Office of Innovation and Technology, which is currently headed by interim leadership. Whoever that person is, there will be a lot more work to be done — not only to curb the risks, but to embrace the rewards.