Chris Russell is a Group Leader in Safe and Ethical AI at the Alan Turing Institute, and a Reader in Computer Vision and Machine Learning at the University of Surrey. His recent work on explainability (with Sandra Wachter and Brent Mittelstadt of the OII) is cited in guidelines on the GDPR and forms part of the TensorFlow What-If Tool. He was one of the first researchers to propose the use of causal models for reasoning about fairness (namely Counterfactual Fairness), and continues to work extensively in computer vision, where he has won best paper awards at two of the field's largest conferences, BMVC and ECCV.