Speakers

Overview
This session explored how AI systems increasingly act as decision agents and what this implies for autonomy, responsibility, and decision quality. One perspective focused on when people accept automated decision-making, highlighting the tradeoff between friction reduction and loss of agency. A second perspective focused on “reflection machines” and human oversight, showing how targeted questions can reduce overreliance and increase deliberation in human–machine collaboration. A third perspective connected AI to predictive perception, illustrating how computational models are used to understand how the brain anticipates and interprets sensory information.

Key Points
• AI was discussed not only as a predictive tool but as a decision agent that can reduce effort and uncertainty while potentially diminishing human agency.
• Evidence and discussion suggested that “algorithmic aversion” is not inevitable; acceptance depends on perceived benefits and on the perceived loss of control.
• A core challenge was overreliance: systems can be effective yet still lead users to defer to recommendations too readily.
• “Reflection machine” ideas highlighted how structured prompts can trigger critical reflection, calibration, and more autonomous decision-making.
• Open question: under what conditions are people willing to give up agency in exchange for reduced friction (effort, uncertainty, time), especially when real consequences are involved?
Next Steps
Possible collaborations: designing and testing decision-support interfaces that balance friction reduction with meaningful human control, and evaluating whether reflection prompts change reliance, satisfaction, and decision autonomy.
If you are interested in AI and decision-making and would like to connect with the speakers or suggest a speaker for another session, contact Catalina.