Isabella Montoya-Bedoya
Mentor: Dr. Amanda Phalin
Major: Information Systems
Minor: Portuguese
Organizations: Latin American Women in Business, Heavener International Case Team, Women of Warrington, Florida Leadership Academy Advisory Board, University Minority Mentor Program
Academic Awards: Foreign Language and Area Studies Fellowship 2021-2022
Volunteering: N/A
Research Interests: Artificial Intelligence, Augmented Reality, Machine Learning Models, Cloud Technology, Cybersecurity in Business
Hobbies and Interests: Foreign Language Studies, Classical Music, Fictional Writing

Research Project: Detection and Impact of Bias in Machine Learning Algorithms
This research project analyzes the correlation between machine learning models and algorithmic bias in artificial intelligence systems. I will identify operational factors that lead to discriminatory outcomes in algorithms and propose solutions to detect and mitigate bias. Algorithmic bias is defined as systematic and repeatable errors in a computer system that produce unfair outcomes, privileging one arbitrary group of users over another. This bias can arise from a number of factors, including programming errors, limited data accessibility, and programmer bias. A lack of updated data or training can lead systems to unintentionally discriminate against certain groups of people. Models may also operate on data produced by biased intentions and existing inequities. Because algorithmic bias has only recently been recognized, it is imperative to analyze the consequences of these biased algorithms and ensure that discriminatory practices are eliminated from business operations.
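As an illustration of what "detecting bias" can mean in practice, the sketch below computes one common fairness metric, the demographic parity difference: the gap in favorable-outcome rates between two groups of users. This is a minimal, self-contained example of a standard metric, not the project's actual method; the group labels and model outputs are hypothetical.

```python
# Illustrative sketch of a standard bias metric (demographic parity
# difference), not the project's actual detection method.

def demographic_parity_difference(predictions, groups, group_a, group_b):
    """Gap in positive-prediction rates between group_a and group_b.

    predictions: iterable of 0/1 model outcomes (1 = favorable)
    groups: iterable of group labels, aligned with predictions
    """
    def positive_rate(group):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    return positive_rate(group_a) - positive_rate(group_b)

# Hypothetical model outputs for eight applicants in two groups
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is favored at 0.75 vs. 0.25 for group B, a gap of 0.5;
# a value near 0 would indicate parity between the groups.
print(demographic_parity_difference(preds, groups, "A", "B"))  # 0.5
```

A gap this large would flag the model for further review; in practice such metrics are computed on held-out data and compared against a tolerance threshold before deployment.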