Bitya Neuhof & Michal Moshkovitz

Using Explainable Machine Learning – Gaps Between Academia and Industry

PhD student in Statistics and Data Science – Hebrew University.
Research Scientist at Bosch Center for AI.

Bio

Michal is a research scientist at the Bosch Center for AI and a visiting researcher at Tel Aviv University, hosted by Yishay Mansour. Previously, she was a postdoctoral fellow at the Qualcomm Institute at the University of California San Diego and a postdoc at Tel Aviv University. Her interests lie in the foundations of AI, and for the last three years she has focused on developing the mathematical foundations of explainable machine learning.

Michal received her PhD from the Hebrew University and her MSc from Tel Aviv University. During her PhD, she interned with the Machine Learning for Healthcare and Life Sciences group at IBM Research and the Foundations of Machine Learning group at Google. Michal was selected as a 2021 MIT EECS Rising Star and is a recipient of the Anita Borg Scholarship from Google and the Hoffman Scholarship from the Hebrew University.

Bitya is a PhD student in Statistics and Data Science at the Hebrew University, exploring and developing explainable AI methods. Before her PhD, she worked as a data scientist, specializing in analyzing high-dimensional tabular data. Bitya is also a core team member at Baot, the largest Israeli community of experienced women in tech.

Abstract

Explainable AI (XAI) is a rapidly growing field that focuses on making artificial intelligence (AI) systems more transparent, interpretable, and understandable to humans. It is becoming increasingly important as AI systems are being used in more decision-making processes that affect people’s lives, such as hiring decisions, loan approvals, and healthcare recommendations. There are many different stakeholders interested in XAI, including individuals who may be affected by the decisions made by AI systems, policymakers, and organizations that develop and use AI.


Despite the importance of explainable AI, there is currently a gap between the research being done within academic contexts and the needs of industry. This gap can be attributed to a variety of factors, including the complexity of real-world AI systems and the challenges of deploying explainable AI in production environments. Bridging this gap will require collaboration between researchers and industry practitioners, as well as the development of new explainability methods that are effective in real-world settings.


In this roundtable discussion, participants will share their perspectives and experiences with explainability methods and will explore the gap between industry and academia in the field of explainability, focusing on the challenges and opportunities presented by this divide.

Discussion Points

  • Data scientist role definitions – full stack data scientists vs. specialisations
  • Pure data science teams vs. embedded teams
  • Data science reporting lines
  • Professional and personal development in embedded teams

Planned Agenda

8:45 Reception
9:30 Opening words by WiDS TLV ambassador Nitzan Gado and by Lily Ben Ami, CEO of the Michal Sela Forum
9:50 Prof. Bracha Shapira – Data Challenges in Recommender Systems Research: Insights from Bundle Recommendation
10:20 Juan Liu – Accounting Automation: Making Accounting Easier So That People Can Forget About It
10:50 Break
11:00 Lightning talks
12:20 Lunch & poster session
13:20 Roundtable session & poster session
14:05 Roundtable closure
14:20 Break
14:30 Merav Mofaz – “Every Breath You Take and Every Move You Make…I'll Be Watching You:” The Sensitive Side of Smartwatches
14:50 Reut Yaniv – Ad Serving in the Online Geo Space Along Routes
15:10 Rachel Wities – It’s Not Just the Doctor’s Handwriting: Challenges and Opportunities in Healthcare NLP
15:30 Closing remarks
15:40 End