On Day 3, I attended Prof. Fred Lederer's wide-ranging and very interesting talk on the legal aspects of artificial intelligence and the Internet of Things. This summary of what I think were the highlights will not do it justice.
As algorithms are becoming such a determining factor, one suggestion was to treat them like corporations.
The essence of data analytics is that we will find things we did not know were there.
A good source on the Internet of Things: an article by Maciej Kranz in the Harvard Business Review.
Will existing law answer issues posed by the new technology? What is the "right" answer for the society involved?
What if the police want to pull over an autonomous self-driving car for speeding, while the human inside the car is not driving it? As a judge - professors are good at framing questions, but judges need to answer them - my first question is: can a self-driving car speed at all? Or is it designed to obey all traffic rules, including speed limits? I would say yes.
And what about someone who hacks a self-driving car in order to cause an accident with it? Again, there is an unanswered question behind this one: what constitutes the crime? The hacking itself? Causing the accident?
Pretrial detention algorithms are being used by judges more and more. They have also been found to be problematic, for instance biased against African American defendants. My observation is that we know very little about the causality of the factors involved in recidivism.
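To make the bias claim concrete, here is a minimal, hypothetical sketch (in Python, with invented data and an invented score threshold, not any real tool) of one common way such bias is measured: comparing the false positive rate - the share of defendants who did not reoffend but were still labelled high risk - across groups.

```python
from collections import defaultdict

# Each record: (group, risk_score, reoffended) -- entirely invented example data.
records = [
    ("A", 8, False), ("A", 7, False), ("A", 9, True), ("A", 3, False),
    ("B", 8, True),  ("B", 4, False), ("B", 6, False), ("B", 2, False),
]

HIGH_RISK_THRESHOLD = 6  # assumed cutoff for a "high risk" label

# Count, per group, the non-reoffenders and how many of them were flagged high risk.
flagged = defaultdict(int)
non_reoffenders = defaultdict(int)
for group, score, reoffended in records:
    if not reoffended:
        non_reoffenders[group] += 1
        if score >= HIGH_RISK_THRESHOLD:
            flagged[group] += 1

# False positive rate per group: flagged non-reoffenders / all non-reoffenders.
for group in sorted(non_reoffenders):
    rate = flagged[group] / non_reoffenders[group]
    print(f"Group {group}: false positive rate = {rate:.0%}")
```

In this made-up example, group A's non-reoffenders are flagged twice as often as group B's (67% versus 33%), which is the kind of disparity the criticism points to.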
In the conference's Endnote, all the big themes were addressed:
Cybersecurity
Innovation
What comes after digitization?
Artificial intelligence
Designing for the user
User testing for innovation - for example, jury service on your phone
Translation technology
Agility, allowing for failure
Organizational IT maturity
Next generation apps
Cloud computing and surviving the hurricane. In one of the courts, justice never stopped in spite of the storm.
New data uses - e-notifications reduced Failure to Appear by 50-20%.