Chris Russell

Group Leader at The Alan Turing Institute

Chris Russell is a Group Leader in Safe and Ethical AI at the Alan Turing Institute, and a Reader in Computer Vision and Machine Learning at the University of Surrey. His recent work on explainability (with Sandra Wachter and Brent Mittelstadt of the OII) is cited in the guidelines to the GDPR and forms part of the TensorFlow "What-If Tool". He was one of the first researchers to propose the use of causal models in reasoning about fairness (namely Counterfactual Fairness), and he continues to work extensively in computer vision, where he has won best paper awards at two of the field's largest conferences, BMVC and ECCV.

SESSION:
Why AI Fairness Cannot Be Automated