Guelph U launches AI ethics centre amid debate around data privacy, bias
Christopher Reynolds, The Canadian Press
Published Thursday, December 13, 2018 6:35PM EST
The University of Guelph is launching a new hub for artificial intelligence to grapple with ethical questions amidst growing concern around issues of privacy, bias and human-machine interaction.
Zeroing in on the moral side of everything from medical imaging to automated credit card approvals, the Centre for Advancing Responsible and Ethical Artificial Intelligence (CARE-AI) aims to bring together experts to study and teach humanist approaches to AI.
Graham Taylor, a machine learning expert named as the centre's academic director, says AI carries the seeds of both harm and progress, depending on how it's nourished.
"There has to be, with any AI, essentially a tradeoff between what you can do with the technology and how much you're willing to violate people's privacy in order to achieve those technological objectives," Taylor said in an interview.
"We're still fully in control of these systems, so it's the right time to drive toward human values ... potentially before it's too late, because there are things that could go wrong."
Protecting people's right to privacy and weeding out bias in big data will be among his key concerns, as ethical issues continue to draw attention everywhere from Silicon Valley to Toronto's waterfront.
There, Google-affiliated tech company Sidewalk Labs' proposed use of sensors to measure residents' movements in Quayside, a planned neighbourhood near downtown, has sparked debates over facial recognition, data access and profit.
Sometimes technical solutions can resolve the tension between privacy and data collection. In one recent lab experiment, Taylor's team used carbon dioxide detection, rather than more invasive cameras, to determine how many people occupy a space and set the room temperature accordingly.
Built-in prejudice is another issue, said Jennifer Chayes, managing director of Microsoft Research in Cambridge, Mass.
"If I am trying to hire a software development engineer, and I look at what are the characteristics of the successful software development engineers I have in my pool, what you might conclude is that being male is one of the key characteristics," she said.
The new centre will take aim at regulations and public policy related to the ethics of AI technologies, the university said. Researchers will also apply AI to human and animal health, environmental sciences and food and agriculture.
Mary Wells, dean of the College of Engineering and Physical Sciences, highlighted the use of field data and precision agriculture to apply soil nutrients efficiently with minimal environmental impact.
"There are only so many resources on the planet. AI will allow us to optimize the limited resources we have and equalize opportunities," she said.
"Often we think of technology in isolation from social applications. When we use this intellectual property, we have a responsibility to society to do good."
Working with an advisory panel of academic and industry leaders, the centre will draw on nearly 90 researchers and scholars from units across campus, including the Arrell Food Institute and the Centre for Biodiversity Genomics.
It will introduce more AI-related courses and new graduate programs, the university said. The centre will also support a new collaborative specialization in AI approved for funding under an initiative administered by Toronto's Vector Institute for Artificial Intelligence, said Taylor, who is a member of the organization.