How We Calculate Risk 🤖
Understand the data and methodology powering our predictions.
Our Opinionated Methodology (2026 Update)
Automation has evolved radically beyond simple "computerization". Today, we face a dual challenge: physical automation (robotics) and cognitive automation (AI/LLMs). We completely rebuilt our data pipeline on the latest U.S. Bureau of Labor Statistics and O*NET datasets and applied a transparent, highly opinionated formula to calculate two distinct metrics for every occupation: AI Risk and Robotics Risk.
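For readers who like to see the shape of the result, here is a minimal sketch of the per-occupation record such a pipeline could produce. The class and field names are purely illustrative assumptions, not our production code:

```python
from dataclasses import dataclass

@dataclass
class OccupationRisk:
    """Illustrative per-occupation output record (names are assumptions)."""
    soc_code: str          # SOC occupation code used by BLS/O*NET, e.g. "15-1252"
    title: str             # occupation title
    ai_risk: float         # cognitive (AI/LLM) automation risk, capped to 0.01-0.99
    robotics_risk: float   # physical (robotics) automation risk, capped to 0.01-0.99
```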
🧠 Calculating AI & Cognitive Risk
We evaluate an occupation's core tasks (e.g., gathering information, processing data) against its human defense factors. Skills like "Social Perceptiveness", "Originality", and "Complex Problem Solving" actively reduce AI risk. We calculate the base AI risk and offset it by 60% of the defense score to get a baseline: AI Risk = Base Risk - (Defense * 0.6).
However, because of the explosive, disruptive capabilities of modern Large Language Models (LLMs) and Generative AI, we then apply a 20% uplift (a 1.2x multiplier) to every AI risk score. We firmly believe past academic studies have severely underestimated the speed of cognitive automation.
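As a rough sketch of that arithmetic (assuming the base risk and defense scores are already normalized to a 0-1 scale; the function and variable names are ours for illustration, not the production pipeline's):

```python
def ai_risk(base_risk: float, defense: float) -> float:
    """Cognitive (AI) risk: subtract 60% of the defense score from the base
    risk, then apply the opinionated 20% LLM-era uplift. Inputs are assumed
    to be normalized to 0..1; the 1%-99% cap is explained below."""
    baseline = base_risk - 0.6 * defense   # AI Risk = Base Risk - (Defense * 0.6)
    boosted = baseline * 1.2               # deliberate 1.2x adjustment for LLMs
    return min(max(boosted, 0.01), 0.99)   # keep the result inside 1%-99%
```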
🤖 Calculating Physical Robotics Risk
For physical risk, we average the O*NET metrics related to manual dexterity, repetitive motions, and machine control. As with AI, we take an aggressive stance on hardware progress: we apply a 10% uplift (a 1.1x multiplier) to the baseline robotics risk, anticipating massive leaps in humanoid robotics and embodied AI over the next decade.
💡 Both probabilities are mathematically capped between 1% and 99%, because we believe no job is ever completely immune (0%) or certain to be fully automated (100%).
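A comparable sketch for the physical score, again with illustrative names and assuming the relevant O*NET Work Context ratings have been normalized to 0-1:

```python
def robotics_risk(manual_dexterity: float,
                  repetitive_motion: float,
                  machine_control: float) -> float:
    """Physical (robotics) risk: average the normalized O*NET metrics,
    apply the 10% hardware-optimism uplift, and cap at 1%-99%."""
    baseline = (manual_dexterity + repetitive_motion + machine_control) / 3
    boosted = baseline * 1.1               # 1.1x for anticipated embodied-AI progress
    return min(max(boosted, 0.01), 0.99)
```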
Historical Context: The Oxford Martin Study (2013)
In September 2013, Carl Benedikt Frey and Michael Osborne of the Oxford Martin School released their landmark paper, "The Future of Employment: How susceptible are jobs to computerisation?".
Their initial research estimated that about 47% of all U.S. jobs were at high risk of computerization. That paper was the foundational spark for this site, but the world has changed dramatically since 2013. Our 2026 rebuild uses the BLS Occupational Employment and Wage Statistics (OEWS) and O*NET Work Context data as the primary foundation, with the Oxford paper serving as a historical reference point only.
Why this site? 🤷
Reading complex academic papers and cross-referencing government datasets? Ain't nobody got time for that.
That is exactly why I, Fabian Beiner, created this tool as a quick weekend hack back in 2015. In 2026, we completely rebuilt the data pipeline using fresh O*NET and BLS data, replacing the original Oxford estimates with a dual AI + Robotics risk score for every occupation.
Data Sources & Licensing 🗃️
Our predictions are built entirely on top of powerful, publicly accessible datasets. We are deeply grateful to the institutions maintaining them:
- O*NET OnLine (USDOL/ETA): This site incorporates information from O*NET OnLine, sponsored by the U.S. Department of Labor, Employment and Training Administration (USDOL/ETA). O*NET is a trademark of USDOL/ETA. The data is used under the CC BY 4.0 license.
- U.S. Bureau of Labor Statistics (BLS): Wage and employment density data is provided by the Occupational Employment and Wage Statistics (OEWS) program. BLS data is in the public domain and may be used freely.
- Oxford Martin School (2013): Historical baseline automation risk percentages are derived from the foundational academic paper "The Future of Employment: How susceptible are jobs to computerisation?" by Carl Benedikt Frey and Michael A. Osborne.
- displacement.ai (2026): Additional AI and Robotics risk scores and analysis data are sourced from the AI Job Displacement Risk Index provided by displacement.ai. This data is licensed under the CC BY 4.0 license.
- Robot Brand Logo: Our robot logo uses assets designed by Freepik. We are grateful for their high-quality vector art.