The state comptroller’s office is calling for stronger oversight of New York City government’s artificial intelligence programs, after an audit identified lapses that it says heighten risks of bias, inaccuracies and harm “for those who live, work or visit NYC.”
The audit of four city agencies from January 2019 to last November turned up inconsistent, “ad hoc and incomplete approaches to AI governance,” and no rules at all in some areas. Without proper oversight, according to the findings, “misguided, outdated or inaccurate outcomes can occur and may lead to unfair or ineffective outcomes.”
Some of the audited agencies (the NYPD, the Department of Education, the Department of Buildings and the Administration for Children’s Services) identified potential AI risks and took steps to mitigate them, while others had no such measures in place.
None kept formal policies on the “intended use and outcomes” of the tools, the report said.
“NYC does not have an effective AI governance framework,” according to the findings. While local law requires city agencies to annually report certain algorithmic tools they use, the auditors noted that “there are no rules or guidance on the actual use of AI.”
State Comptroller Thomas P. DiNapoli recommended a clear list of the AI programs used by city government, explanations of why they are being used, and standards to prevent bias and inaccuracy.
“Government’s use of artificial intelligence to improve public services is not new,” DiNapoli said in a statement. “But there need to be formal guidelines governing its use.”
The mayor’s office did not immediately respond to a request for comment on the audit’s findings.
According to the report, a 2019 executive order established a reporting framework of “algorithmic tools, policies and protocols” to guide the city and its agencies in the “fair and responsible” use of AI. It also called for setting up a process to resolve complaints…