19/03/2026

Analyst Biases in the AI Era: How to Avoid Wrong Decisions
15:00 UTC
Online
English
About the event
Business and system analysts make decisions under uncertainty, and that is exactly where cognitive biases thrive: false causality, anchoring, confirmation bias, overconfidence, survivorship bias, and framing effects. AI can help reduce these traps, but it can also amplify them by producing "plausible answers" that match our assumptions and feel objectively correct. In this session, we will dissect real BA/SA scenarios where biased thinking turns into incorrect scope, weak metrics, and wasted delivery, and demonstrate practical, repeatable bias-control techniques, including AI-assisted checks that don't devolve into AI hype. You'll leave with a compact toolkit of prompts and templates.
This event will be of interest to:
Business Analysts, System Analysts, Product Owners, UX Researchers, and Data Analysts who work with stakeholder assumptions, metrics, experiments, dashboards, or AI assistants, and want to reduce decision errors, scope drift, and “we delivered it, but nothing improved” outcomes. Particularly useful for specialists involved in Discovery, prioritization, estimation, metrics definition, and stakeholder workshops.

Topics and speakers

Alexander Malyarenko
Business and System Analyst at Andersen. Economist, Data Analyst
Analyst Biases in the AI Era: How to Avoid Wrong Decisions
About the topic
Agenda:
1. Why analysts are vulnerable to biases (and why AI changes the game)
2. Six bias traps in BA/SA work: fast cases and what-can-go-wrong patterns
3. AI as a bias amplifier vs. a bias counter-tool: what to do differently in prompts and analysis flow
4. Bias-control toolkit (practical)
5. Q&A session
About the speaker
Alexander Malyarenko is a Business and System Analyst at Andersen and an economist/data analyst with over 15 years of research experience in the macroeconomic and behavioral patterns behind decision-making. In IT projects, he focuses on how teams translate ambiguous stakeholder inputs into requirements, metrics, and delivery decisions and how cognitive biases quietly distort that translation. In recent work, Alexander has applied “bias-aware” analysis techniques and AI-assisted validation (neutral prompts, counter-hypothesis generation, and evidence checks) to reduce false assumptions during Discovery and prevent “confident but wrong” requirements.
Attend our free online meetup