After a year of heavy scrutiny and seemingly endless controversy around artificial-intelligence (AI) technologies, the field’s most prestigious conference has tried to set a good example. For the first time, the Neural Information Processing Systems (NeurIPS) meeting, which took place completely online this month, required presenters to submit a statement on the broader impact their research could have on society, including any possible negative effects.
The organizers also appointed a panel of reviewers to scrutinize papers that raised ethical concerns — a process that could lead to their rejection.
“I think there’s a lot of value even in getting people to think about these things,” says Jack Poulson, founder of the industry watchdog Tech Inquiry in Toronto, Canada. He adds that the policy could help to shift culture in the field.
Researchers who work on machine learning are increasingly aware of the challenges posed by harmful uses of the technology, from the creation of falsified videos, or ‘deepfakes’, to mistakes by police who rely on facial-recognition algorithms in deciding whom to arrest.
“There was previously a period of techno...