Prestigious Alan Turing Institute Award for 2022 won by Koorosh Aslansefat, Researcher in SESAME


The Alan Turing Institute has recognised cutting-edge research on Artificial Intelligence done in SESAME. Koorosh Aslansefat, a researcher in SESAME, has received The Alan Turing Institute Award for Postdoctoral Enrichment in 2022 for his work on the Safety of Machine Learning. The award provides financial and other incentives to help Koorosh advance his important work on the safety of Artificial Intelligence. The annual award recognises the remarkable achievements of researchers in data science and AI, as well as their potential, particularly those who will embrace the opportunity to enrich their own research through the Turing environment.

The Alan Turing Institute is the UK's national institute for data science and artificial intelligence, with headquarters at the British Library. The Institute is named in honour of Alan Turing, whose pioneering work in theoretical and applied mathematics, engineering and computing laid the foundations of what are now the fields of data science and artificial intelligence.

Explaining the work of the Dependable Intelligent Systems research group that led to this award, Professor Papadopoulos said: "Many experts predicting the technological future claim that Artificial Intelligence poses existential risks for humanity. In the extreme, super-intelligent machines could threaten humanity or the stability of the economy. In simpler scenarios, autonomous cars may fail to correctly read traffic lights or traffic signs in adverse environmental conditions, and systems that rely on face recognition could discriminate against individuals or groups."


Koorosh has recently made a breakthrough in this area by developing a technology called Safety of Machine Learning (SafeML). Machine learning algorithms learn by analysing large volumes of data. For example, an algorithm may learn to detect a disease by examining images of typical symptoms of that disease. The accuracy of the algorithm's reasoning depends on these data. The algorithm may therefore be biased towards the typical symptoms it was trained on, and may give false negatives when encountering atypical symptoms of the disease. In another scenario, an algorithm may have been trained to detect features on a human face by examining pictures of a certain ethnic group. It may fail to detect similar features when it encounters images of people of different ethnic origins.


SafeML uses statistical techniques to detect and report on such biases. This makes machine learning more accurate, fair and safe, which is a significant contribution to technology and humanity. This work has been carried out under the SESAME project.
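The core idea of comparing the statistical distribution of the data a model sees at runtime with the distribution of its training data can be sketched with a two-sample Kolmogorov-Smirnov distance, one of the empirical-distribution measures used in this line of work. The Gaussian toy data and the alarm threshold below are hypothetical, chosen purely for illustration and not taken from SafeML itself:

```python
import random

def ks_distance(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov distance: the largest gap
    between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    points = sorted(set(a) | set(b))
    max_gap = 0.0
    for x in points:
        cdf_a = sum(v <= x for v in a) / len(a)  # fraction of sample_a <= x
        cdf_b = sum(v <= x for v in b) / len(b)  # fraction of sample_b <= x
        max_gap = max(max_gap, abs(cdf_a - cdf_b))
    return max_gap

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(500)]    # training-time feature values
similar = [random.gauss(0.0, 1.0) for _ in range(500)]  # runtime data, same distribution
shifted = [random.gauss(2.0, 1.0) for _ in range(500)]  # runtime data after a shift

THRESHOLD = 0.15  # hypothetical alarm threshold
print("in-distribution, no alarm:", ks_distance(train, similar) < THRESHOLD)
print("shifted, raise alarm:", ks_distance(train, shifted) >= THRESHOLD)
```

When the runtime distance exceeds the threshold, the model is operating on data it was not trained for, so its predictions should not be trusted without further checks, which is exactly the situation the face-recognition and atypical-symptom examples above describe.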
