A new initiative led by University of Toronto researcher Parham Aarabi aims to evaluate the biases present in artificial intelligence systems as a first step toward fixing them.
AI systems generally mirror biases present in their training datasets – or, in some cases, the AI’s modelling can introduce new biases of its own.
“Every AI system has some kind of bias,” says Aarabi, an associate professor of communications/computer engineering in the Edward S. Rogers Sr. department of electrical and computer engineering in the Faculty of Applied Science & Engineering. “I say that as someone who has worked on AI systems and algorithms for more than 20 years.”
Aarabi is among the academic and industry experts in the University of Toronto’s HALT AI group, which tests other organizations’ AI systems using diverse input sets. HALT AI creates a diversity report – including a