Researchers devise approach to reduce biases in computer vision data sets


American Journal of Computer Science and Information Technology is an open access, peer-reviewed journal that focuses on all aspects of Application Programming, Business Applications, Network Programming, System Administration, Web Systems, Data Structures and Algorithms, Operating Systems and beyond. Celebrating a successful publication history, the journal reiterates its commitment to promoting systematic enquiry and exploration in various fields, including Databases, Artificial Intelligence, Information Organization and Retrieval, and Computer Architecture.

ImageNet, which includes images of objects and landscapes as well as people, serves as a source of training data for researchers creating machine learning algorithms that classify images or recognize elements within them. ImageNet's unprecedented scale necessitated automated image collection and crowdsourced image annotation. While the database's person categories have rarely been used by the research community, the ImageNet team has been working to address biases and other concerns about images featuring people that are unintended consequences of ImageNet's construction.
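For context, the sketch below shows how an ImageNet-style dataset is typically consumed when training an image classifier. It is a minimal illustration only: the directory path, model choice and hyperparameters are assumptions for the example and are not taken from the study.

```python
# Minimal sketch: training an image classifier on an ImageNet-style dataset.
# Paths and hyperparameters are illustrative assumptions.
import torch
from torch import nn
from torchvision import datasets, transforms, models

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # standard ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical local copy laid out as one folder per class.
train_set = datasets.ImageFolder("data/imagenet/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

model = models.resnet50(weights=None, num_classes=len(train_set.classes))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for images, labels in loader:   # a single pass over the data, for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```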

"Computer vision now works really well, which means it's being deployed all over the place in all kinds of contexts," said co-author Olga Russakovsky, an assistant professor of computer science at Princeton. "This means that now is the time for talking about what kind of impact it's having on the world and thinking about these kinds of fairness issues."

The ImageNet team systematically identified non-visual concepts and offensive categories, such as racial and sexual characterizations, among ImageNet's person categories and proposed removing them from the database. The researchers also designed a tool that allows users to specify and retrieve image sets of people that are balanced by age, gender expression or skin color -- with the goal of facilitating algorithms that more fairly classify people's faces and activities in images.
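The researchers' tool is not reproduced here, but the core idea of retrieving a demographically balanced image set can be sketched as follows. The record format, attribute names and helper function are hypothetical assumptions for illustration, not the interface of the ImageNet team's tool.

```python
# Minimal sketch of balanced sampling, assuming per-image annotations for an
# attribute such as gender expression or skin color. All field names and the
# helper below are illustrative assumptions.
import random
from collections import defaultdict

def balanced_sample(records, attribute, per_group, seed=0):
    """Return a subset with an equal number of images from each attribute value."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[attribute]].append(rec)

    rng = random.Random(seed)
    sample = []
    for value, items in groups.items():
        if len(items) < per_group:
            raise ValueError(f"only {len(items)} images annotated {attribute}={value}")
        sample.extend(rng.sample(items, per_group))
    return sample

# Example usage with hypothetical annotation records:
# records = [{"image_id": "img_001", "skin_color": "darker"},
#            {"image_id": "img_002", "skin_color": "lighter"}, ...]
# subset = balanced_sample(records, attribute="skin_color", per_group=100)
```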

A recent art project called ImageNet Roulette brought increased attention to these concerns. The project, released in September 2019 as part of an art exhibition on image recognition systems, used images of people from ImageNet to train an artificial intelligence model that assigned labels to people in submitted images. Users could upload a photo of themselves and see how the model classified them. Many of the classifications were offensive or simply off-base.

Media Contact:
Desrina R
Assistant Editorial Manager
American Journal of Computer Science and Information Technology
WhatsApp No: +1-504-608-2390
Email: computersci@scholarlypub.com