Artificial Intelligence Predictions on Student Success in Colleges Often Prove Inaccurate

AI models often fail to accurately forecast achievement among Black and Hispanic students, according to recent studies.

In a study published in July 2021 in AERA Open, Denisa Gándara, an assistant professor at the University of Texas at Austin, revealed that AI algorithms used to predict college student success can exhibit significant bias against Black and Hispanic students.

The study, which analyzed data from 15,244 students over a decade, found that these predictive AI technologies are more likely to incorrectly predict failure for Black and Hispanic students than for white students. Specifically, the models studied inaccurately predicted that a student wouldn't graduate 12% of the time if the student was white, 6% of the time if the student was Asian, 21% of the time if the student was Hispanic, and 19% of the time if the student was Black.
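Disparities like these are typically surfaced by computing, for each group, the share of actual graduates whom the model wrongly flagged as non-graduates. The sketch below illustrates that audit calculation on a small, made-up set of records (the data, field names, and function are hypothetical, not the study's code):

```python
from collections import Counter

# Hypothetical records: (group, actually_graduated, predicted_to_graduate).
# Illustrative data only -- not the study's dataset.
records = [
    ("white", True, False), ("white", True, True), ("white", True, True),
    ("Hispanic", True, False), ("Hispanic", True, True),
    ("Black", True, False), ("Black", True, True),
    ("Asian", True, True), ("Asian", True, True),
]

def false_failure_rates(records):
    """Share of actual graduates in each group whom the model
    wrongly predicted would not graduate."""
    wrong, total = Counter(), Counter()
    for group, graduated, predicted in records:
        if graduated:  # only actual graduates can be falsely flagged as failures
            total[group] += 1
            wrong[group] += not predicted
    return {group: wrong[group] / total[group] for group in total}

print(false_failure_rates(records))
```

Comparing these per-group rates side by side, as the study does, is what exposes a model that looks accurate in aggregate but errs far more often for some groups.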

This bias arises primarily due to the human-based training data and algorithm design, which may reflect existing societal inequalities and disparities in educational opportunities. The study suggests that predictive AI technologies could exacerbate existing social inequities, influencing crucial decisions like admissions and the allocation of student support services.

The study tested four predictive machine-learning tools commonly used in higher education. The tools were trained using 80% of the data and tested with the remaining 20%. The researchers used a large, nationally representative dataset that includes variables commonly used to predict college student success.
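An 80/20 split of this kind means the models learn patterns from 80% of the students and are then evaluated on the held-out 20% they have never seen. A minimal sketch of that evaluation design, using a shuffled split over placeholder records (the helper function and record format are assumptions for illustration, not the researchers' code):

```python
import random

def train_test_split(rows, test_fraction=0.2, seed=42):
    """Shuffle records and split them into training and held-out
    test sets, mirroring the study's 80/20 evaluation design."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # fixed seed for a reproducible split
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

# Placeholder student records matching the study's sample size of 15,244.
students = [{"id": i} for i in range(15244)]
train, test = train_test_split(students)
print(len(train), len(test))  # prints "12195 3049"
```

Holding out a test set is what allows the researchers to measure how the tools behave on students the models were not trained on, which is where group-level error disparities become visible.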

The key findings of the study highlight the need for regular audits, diverse development teams, bias impact statements, and increased human oversight to detect and address discriminatory patterns in AI predictions. Equitable access to AI tools and digital literacy is essential to ensure Black and Hispanic students can fully benefit from AI-enabled educational support without worsening existing digital divides.

Moreover, the study emphasizes the responsibility of educational institutions to actively monitor and mitigate AI bias to avoid reinforcing educational inequities for Black and Hispanic students while leveraging AI tools to enhance student success broadly.

It's important to note that the use of predictive models is not confined to colleges and universities; they are also widely used in K-12 schools. There, they carry a risk of educational tracking, in which students from racially minoritized groups are steered toward less challenging educational trajectories.

This research was inspired by findings in other fields, such as healthcare and criminal justice, that had previously revealed bias against socially marginalized groups. Colleges should implement comprehensive strategies, including bias identification and mitigation training for stakeholders, to foster fairer AI use in recruitment, advising, placement, and success prediction.

Schools and colleges typically use smaller administrative datasets covering only their own students, and the quality and quantity of that data vary widely across institutions. The study's use of a nationally representative dataset therefore provides valuable insight into the biases that could be present in AI predictions at educational institutions across the United States.

In conclusion, the study underscores the need for educational institutions to be vigilant in addressing AI bias and to ensure that AI technologies are used to promote equity and fairness in education rather than perpetuating existing disparities.
