AI Assistance Does Not Worsen Assessments of Bias in Health Research
Reviewers assisted by the open-access RobotReviewer platform, which uses machine learning and natural-language processing to partially automate the assessment of potential bias in health-research papers, fared no worse than human reviewers working alone, according to a new report.
@mina-s 'Everything' is the key word here. The increasingly rapid rate of evidence production offers the opportunity to know more, but it also makes collecting all that knowledge in a reliable way a challenge. Automation offers one solution to that problem, but many are justifiably concerned that it would bias the science produced by human judgement, making systematic-review conclusions less reliable instead of more reliable.