RE: references added to AI wiki


To the RQTF

Apologies, it looks like the attachments didn’t go through to the list. The references are in the wiki, and I’ve also included them here in BibTeX format below.
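In case anyone wants to cite these directly in a LaTeX document, here is a minimal sketch, assuming the entries below are saved to a hypothetical refs.bib file (the filename and the example citation key are just for illustration):

```latex
\documentclass{article}
\begin{document}
% Cite any entry below by its key, e.g. the Morris CACM article:
AI and accessibility is surveyed in \cite{RN36}.
\bibliographystyle{plain}
\bibliography{refs} % refs.bib contains the entries from this message
\end{document}
```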


@article{RN5,
   author = {Abdusalomov, Akmalbek Bobomirzaevich and Mukhiddinov, Mukhriddin and Kutlimuratov, Alpamis and Whangbo, Taeg Keun},
   title = {Improved Real-Time Fire Warning System Based on Advanced Technologies for Visually Impaired People},
   journal = {Sensors},
   volume = {22},
   number = {19},
   pages = {7305},
   ISSN = {1424-8220},
   DOI = {10.3390/s22197305},
   url = {https://dx.doi.org/10.3390/s22197305},
   year = {2022},
   type = {Journal Article}
}

@article{RN2,
   author = {Acosta-Vargas, Patricia and Salvador-Acosta, Belén and Novillo-Villegas, Sylvia and Sarantis, Demetrios and Salvador-Ullauri, Luis},
   title = {Generative Artificial Intelligence and Web Accessibility: Towards an Inclusive and Sustainable Future},
   journal = {Emerging Science Journal},
   volume = {8},
   number = {4},
   pages = {1602-1621},
   abstract = {This study examines the accessibility of Generative Artificial Intelligence (AI) tools for people with disabilities, using WCAG 2.2 success criteria as a reference. Significant accessibility issues were identified in the evaluated applications, highlighting barriers mainly affecting disabled users. Integrating accessibility considerations from the beginning of application development and adopting a proactive approach are emphasized. Although challenges are faced, such as the shortage of inclusive training data and opacity in AI decision-making, the need to continue addressing various aspects of accessibility in the field of generative AI tools is acknowledged. These efforts are based on regulatory compliance and ethical principles to ensure equal societal participation, regardless of individual abilities. The fundamental role of accessibility in realizing this vision is highlighted, aligning with the United Nations Sustainable Development Goals, particularly those related to equality, education, innovation, and inclusion. Improving accessibility meets regulatory requirements and contributes to a broader global agenda for a more equitable and sustainable future.},
   DOI = {10.28991/ESJ-2024-08-04-021},
   url = {https://ijournalse.org/index.php/ESJ/article/view/2399},
   year = {2024},
   type = {Journal Article}
}

@article{RN20,
   author = {Akter, Taslima and Ahmed, Tousif and Kapadia, Apu and Swaminathan, Manohar},
   title = {Shared Privacy Concerns of the Visually Impaired and Sighted Bystanders with Camera-Based Assistive Technologies},
   journal = {ACM Trans. Access. Comput.},
   volume = {15},
   number = {2},
   pages = {Article 11},
   keywords = {Privacy, visually impaired, augmented reality, AI ethics, fairness and bias},
   ISSN = {1936-7228},
   DOI = {10.1145/3506857},
   url = {https://doi.org/10.1145/3506857},
   year = {2022},
   type = {Journal Article}
}

@article{RN45,
   author = {Atf, Zahra and Lewis, Peter R.},
   title = {Towards inclusive explainable artificial intelligence: a thematic analysis and scoping review on tools for persons with disabilities},
   journal = {Disability and Rehabilitation: Assistive Technology},
   pages = {1-22},
   ISSN = {1748-3107},
   DOI = {10.1080/17483107.2025.2507684},
   url = {https://doi.org/10.1080/17483107.2025.2507684},
   year = {2025},
   type = {Journal Article}
}

@article{RN44,
   author = {Beck Wells, Melissa},
   title = {Disability services in higher education: Statistical disparities and the potential role of AI in bridging institutional gaps},
   journal = {PLOS One},
   volume = {20},
   number = {5},
   pages = {e0322728},
   ISSN = {1932-6203},
   DOI = {10.1371/journal.pone.0322728},
   url = {https://dx.doi.org/10.1371/journal.pone.0322728},
   year = {2025},
   type = {Journal Article}
}

@inproceedings{RN9,
   author = {C, C. and Chennamma and V, R. and M. K. M, V. and P. B, S. and S. H, R. and Thomas, L. and S. D. S, L.},
   title = {Image/Video Summarization in Text/Speech for Visually Impaired People},
   booktitle = {2022 IEEE 2nd Mysore Sub Section International Conference (MysuruCon)},
   pages = {1-6},
   DOI = {10.1109/MysuruCon55714.2022.9972653},
   type = {Conference Proceedings}
}

@inproceedings{RN43,
   author = {C, R. and Babu, K. S. and Ranjith, K. S. and Sinha, A. K. and Neerugatti, V. and Reddy, D. S.},
   title = {Enhancing E-learning Accessibility through AI(Artificial Intelligence) and Inclusive Design},
   booktitle = {2025 6th International Conference on Mobile Computing and Sustainable Informatics (ICMCSI)},
   pages = {1466-1471},
   DOI = {10.1109/ICMCSI64620.2025.10883148},
   type = {Conference Proceedings}
}

@article{RN35,
   author = {Chemnad, Khansa and Othman, Achraf},
   title = {Digital accessibility in the era of artificial intelligence—Bibliometric analysis and systematic review},
   journal = {Frontiers in Artificial Intelligence},
   volume = {7},
   abstract = {Introduction: Digital accessibility involves designing digital systems and services to enable access for individuals, including those with disabilities, including visual, auditory, motor, or cognitive impairments. Artificial intelligence (AI) has the potential to enhance accessibility for people with disabilities and improve their overall quality of life. Methods: This systematic review, covering academic articles from 2018 to 2023, focuses on AI applications for digital accessibility. Initially, 3,706 articles were screened from five scholarly databases—ACM Digital Library, IEEE Xplore, ScienceDirect, Scopus, and Springer. Results: The analysis narrowed down to 43 articles, presenting a classification framework based on applications, challenges, AI methodologies, and accessibility standards. Discussion: This research emphasizes the predominant focus on AI-driven digital accessibility for visual impairments, revealing a critical gap in addressing speech and hearing impairments, autism spectrum disorder, neurological disorders, and motor impairments. This highlights the need for a more balanced research distribution to ensure equitable support for all communities with disabilities. The study also pointed out a lack of adherence to accessibility standards in existing systems, stressing the urgency for a fundamental shift in designing solutions for people with disabilities. Overall, this research underscores the vital role of accessible AI in preventing exclusion and discrimination, urging a comprehensive approach to digital accessibility to cater to diverse disability needs.},
   keywords = {digital accessibility,artificial intelligence,AI,Research analysis,Systematic review,Persons with Disabilities},
   ISSN = {2624-8212},
   DOI = {10.3389/frai.2024.1349668},
   url = {https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2024.1349668},
   year = {2024},
   type = {Journal Article}
}

@article{RN14,
   author = {Cimolino, Gabriele and Askari, Sussan and Graham, T.C. Nicholas},
   title = {The Role of Partial Automation in Increasing the Accessibility of Digital Games},
   journal = {Proc. ACM Hum.-Comput. Interact.},
   volume = {5},
   number = {CHI PLAY},
   pages = {Article 266},
   keywords = {artificial intelligence, automation, game accessibility, personalization, shared control},
   DOI = {10.1145/3474693},
   url = {https://doi.org/10.1145/3474693},
   year = {2021},
   type = {Journal Article}
}

@article{RN3,
   author = {Dash, Samir},
   title = {AI-Powered Real-time Accessibility Enhancement: A Solution for Web Content Accessibility Issues},
   journal = {JOIN (Jurnal Online Informatika)},
   volume = {9},
   number = {1},
   pages = {80-88},
   abstract = {The web accessibility landscape is a significant challenge, with 96.3% of home pages displaying issues with Web Content Accessibility Guidelines (WCAG). This paper addresses the primary accessibility issues, such as missing Accessible Rich Internet Applications (ARIA) landmarks, ill-formed headings, low contrast text, and inadequate form labeling. The dynamic nature of modern web and cloud applications presents challenges, such as developers' limited awareness of accessibility implications, potential code bugs, and API failures. To address these issues, an AI-enabled system is proposed to dynamically enhance web accessibility. The system uses machine learning algorithms to identify and rectify accessibility issues in real-time, integrating with existing development workflows. Empirical evaluation and case studies demonstrate the efficacy of this solution in improving web accessibility across diverse scenarios.},
   keywords = {Artificial intelligence, Semantic Web},
   ISSN = {2528-1682},
   DOI = {10.15575/join.v9i1.1310},
   year = {2024},
   type = {Journal Article}
}

@misc{RN12,
   author = {Duarte, Carlos and Pereira, Letícia Seixas and Santos, André and Vicente, João and Rodrigues, André and Guerreiro, João and Coelho, José and Guerreiro, Tiago},
   title = {Nipping Inaccessibility in the Bud: Opportunities and Challenges of Accessible Media Content Authoring},
   publisher = {Association for Computing Machinery},
   pages = {3–9},
   keywords = {accessibility, social media, user-generated content, visual content},
   DOI = {10.1145/3462741.3466644},
   url = {https://doi.org/10.1145/3462741.3466644},
   year = {2021},
   type = {Conference Paper}
}

@inproceedings{RN18,
   author = {Glazko, Kate S and Yamagami, Momona and Desai, Aashaka and Mack, Kelly Avery and Potluri, Venkatesh and Xu, Xuhai and Mankoff, Jennifer},
   title = {An autoethnographic case study of generative artificial intelligence's utility for accessibility},
   booktitle = {Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility},
   pages = {1-8},
   type = {Conference Proceedings}
}

@article{RN37,
   author = {Goldenthal, Emma and Park, Jennifer and Liu, Sunny X. and Mieczkowski, Hannah and Hancock, Jeffrey T.},
   title = {Not All AI are Equal: Exploring the Accessibility of AI-Mediated Communication Technology},
   journal = {Computers in Human Behavior},
   volume = {125},
   pages = {106975},
   abstract = {While AI technologies and tools offer various potential benefits to their users, it is not clear whether opportunities to access these benefits are equally accessible to all. We examine this gap between availability and accessibility as it relates to the adoption of AI-Mediated Communication (AI-MC) tools, which enable interpersonal communication where an intelligent agent operates on behalf of a communicator. Upon defining six functional AI-MC types (voice-assisted communication, language correction, predictive text suggestion, transcription, translation, personalized language learning) we conducted an online survey of 519 U.S. participants that combined closed- and open-ended measures. Our quantitative results revealed how AI-MC adoption is related to software, device, and internet access for tools such as voice-assisted communication; demographic factors such as age, education and income in the case of translation and transcription tools; and some components of AI-MC literacy for specific functional tools. Our qualitative analyses provide additional nuance for these findings, and we articulate a number of barriers to access, understanding, and usage of AI-MC tools, which we suggest hinder AI-MC accessibility for user groups traditionally disadvantaged by one-size-fits-all technological tools. We end with a call for broadly addressing accessibility concerns within the digital technology industry.},
   keywords = {Artificial intelligence, Digital access, Digital literacy, AI-Mediated Communication, Socioeconomic factors},
   ISSN = {0747-5632},
   DOI = {10.1016/j.chb.2021.106975},
   url = {https://www.sciencedirect.com/science/article/pii/S0747563221002983},
   year = {2021},
   type = {Journal Article}
}

@article{RN13,
   author = {Guo, Zichun and Wang, Zihao and Jin, Xueguang},
   title = {“Avatar to Person”(ATP) virtual human social ability enhanced system for disabled people},
   journal = {Wireless Communications and Mobile Computing},
   volume = {2021},
   number = {1},
   pages = {5098992},
   ISSN = {1530-8677},
   year = {2021},
   type = {Journal Article}
}

@inbook{RN46,
   author = {Gupta, Abhishek and Treviranus, Jutta and Vartiainen, Matti and Bus, Jacques and Schaffers, Hans},
   title = {Inclusively Designed Artificial Intelligence},
   publisher = {Routledge},
   address = {United Kingdom},
   edition = {1},
   pages = {89-110},
   abstract = {Artificial intelligence (AI) can either automate and amplify existing biases, or provide new opportunities for previously marginalized individuals and groups. Small minorities and outliers are frequently excluded or misrepresented in population data sets. Even if their data is included, data-driven decisions favor the statistical average, thereby disadvantaging small minorities. Small minorities and people at the margins are also most vulnerable to data abuse and misuse. Current privacy protections are ineffective if you are an outlier or in some way anomalous. This chapter will discuss the challenges, dangers, and opportunities of machine learning and AI for individuals and groups that are not represented by the majority.},
   ISBN = {8770222207},
   DOI = {10.1201/9781003337928-5},
   year = {2020},
   type = {Book Section}
}

@article{RN24,
   author = {Harum, Norharyati Binti and M. S. K, Nur’aliah Izzati and Emran, Nurul Akmar and Abdullah, Noraswaliza and Zakaria, Nurul Azma and Hamid, Erman and Anawar, Syarulnaziah},
   title = {A Development of Multi-Language Interactive Device using Artificial Intelligence Technology for Visual Impairment Person},
   journal = {International Journal of Interactive Mobile Technologies (iJIM)},
   volume = {15},
   number = {19},
   pages = {79-92},
   abstract = {The issue of lacking reference books in braille in most public building is crucial, especially public places like libraries, museum and others. The visual impairment or blind people is not getting the information like we normal vision do. Therefore, a multi languages reading device for visually impaired is built and designed to overcome the limitation of reference books in public places. Some research regarding current product available is done to develop a better reading device. This reading device is an improvement from previous project which only focuses on single language which is not suitable for public places. This reading device will take a picture of the book using 5MP Pi camera, Google Vision API will extract the text, and Google Translation API will detect the language and translated to desired language based on push buttons input by user. Google Text-to-Speech will convert the text to speech and the device will read out aloud in through audio output like speaker or headphones. A few testings have been made to test the functionality and accuracy of the reading device. The testings are functionality, performance test and usability test. The reading device passed most of the testing and get a score of 91.7/100 which is an excellent (A) rating.},
   DOI = {10.3991/ijim.v15i19.24139},
   url = {https://online-journals.org/index.php/i-jim/article/view/24139},
   year = {2021},
   type = {Journal Article}
}

@article{RN39,
   author = {Ilham, Gemiharto and Samson, C. M. S.},
   title = {Inclusivity and Accessibility in Digital Communication Tools: Case Study of AI-Enhanced Platforms in INDONESIA},
   journal = {Jurnal Pewarta Indonesia},
   volume = {6},
   number = {1},
   pages = {78-88},
   abstract = {In the dynamic digital communication landscape, ensuring inclusivity and accessibility remains a pivotal concern. This qualitative case study explores contemporary challenges and inventive solutions within AI-enhanced platforms to champion digital communication that is all-encompassing. This research uses a qualitative case study methodology for various AI-enhanced digital communication tools. Comprehensive data was collected through in-depth interviews, content analysis, and usability assessments involving participants, including individuals with disabilities, accessibility experts, and digital communication tool developers. The study reveals significant hurdles in achieving inclusivity and accessibility, encompassing accessibility disparities for individuals with disabilities, limited awareness of accessibility features, and inherent design biases. It also unveils forward-looking strategies like AI-driven assistive technologies, voice-activated interfaces, and inclusive design principles that hold the potential to revolutionize digital communication. These findings underscore the pivotal role of inclusive design in AI-enhanced digital communication platforms, emphasizing the necessity for heightened awareness and collaboration among developers, accessibility experts, and users with disabilities. This research underscores the promising role of AI in mitigating accessibility challenges and advancing inclusivity. In pursuing all-encompassing and accessible digital communication, this qualitative case study provides valuable insights into the prevailing difficulties and pioneering pathways within AI-enhanced platforms. It calls for unified efforts among stakeholders to leverage AI's capabilities to render digital communication tools more inclusive, fostering a more equitable online environment},
   DOI = {10.25008/jpi.v6i1.154},
   year = {2024},
   type = {Journal Article}
}

@article{RN32,
   author = {Ingavélez-Guerra, P. and Robles-Bykbaev, V. E. and Pérez-Muñoz, A. and Hilera-González, J. and Otón-Tortosa, S.},
   title = {Automatic Adaptation of Open Educational Resources: An Approach From a Multilevel Methodology Based on Students’ Preferences, Educational Special Needs, Artificial Intelligence and Accessibility Metadata},
   journal = {IEEE Access},
   volume = {10},
   pages = {9703-9716},
   ISSN = {2169-3536},
   DOI = {10.1109/ACCESS.2021.3139537},
   year = {2022},
   type = {Journal Article}
}

@article{RN25,
   author = {Joshi, Rakesh Chandra and Yadav, Saumya and Dutta, Malay Kishore and Travieso-Gonzalez, Carlos M.},
   title = {Efficient Multi-Object Detection and Smart Navigation Using Artificial Intelligence for Visually Impaired People},
   journal = {Entropy},
   volume = {22},
   number = {9},
   pages = {941},
   ISSN = {1099-4300},
   DOI = {10.3390/e22090941},
   url = {https://dx.doi.org/10.3390/e22090941},
   year = {2020},
   type = {Journal Article}
}

@article{RN31,
   author = {Karyono, Karyono and Abdullah, Badr and Cotgrave, Alison and Bras, Ana},
   title = {A Novel Adaptive Lighting System Which Considers Behavioral Adaptation Aspects for Visually Impaired People},
   journal = {Buildings},
   volume = {10},
   number = {9},
   pages = {168},
   ISSN = {2075-5309},
   DOI = {10.3390/buildings10090168},
   url = {https://dx.doi.org/10.3390/buildings10090168},
   year = {2020},
   type = {Journal Article}
}

@article{RN33,
   author = {Kose, Utku and Vasant, Pandian},
   title = {Better campus life for visually impaired University students: intelligent social walking system with beacon and assistive technologies},
   journal = {Wireless Networks},
   volume = {26},
   number = {7},
   pages = {4789-4803},
   abstract = {Objective of this study is to introduce a novel, low-cost intelligent social walking path support system for visually impaired students in a wide campus area, by employing beacons, optimization based Artificial Intelligence techniques, Big Data support, and a system rising over Internet of Things. In detail, the developed system has been used within two connected campus areas of Suleyman Demirel University located in the city of Isparta in Turkey and an effective walking path support was ensured for enabling visually students to go target locations with instructions given by an intelligent system. In this way, it is also aimed to enable students to experience a better campus life. The study done here is unique with its Artificial Intelligence oriented characteristics ensuring an intelligent navigation control and planning system by benefiting from only interactions among beacons and mobile devices as not requiring to use physical road bumps, so lowering costs by eliminating both physical components and advanced communication systems. Also, other students’ data over social media environments are used as Big Data to support effective decisions taken by the system. After real implementation of the system, too much positive feedback was obtained from visually impaired students.},
   ISSN = {1572-8196},
   DOI = {10.1007/s11276-018-1868-z},
   url = {https://doi.org/10.1007/s11276-018-1868-z},
   year = {2020},
   type = {Journal Article}
}

@article{RN7,
   author = {Li, Xinrong and Huang, Meiyu and Xu, Yao and Cao, Yingze and Lu, Yamei and Wang, Pengfei and Xiang, Xueshuang},
   title = {AviPer: assisting visually impaired people to perceive the world with visual-tactile multimodal attention network},
   journal = {CCF Transactions on Pervasive Computing and Interaction},
   volume = {4},
   number = {3},
   pages = {219-239},
   abstract = {Unlike able-bodied persons, it is difficult for visually impaired people, especially those in the educational age, to build a full perception of the world due to the lack of normal vision. The rapid development of AI and sensing technologies has provided new solutions to visually impaired assistance. However, to our knowledge, most previous studies focused on obstacle avoidance and environmental perception but paid less attention to educational assistance for visually impaired people. In this paper, we propose AviPer, a system that aims to assist visually impaired people to perceive the world via creating a continuous, immersive, and educational assisting pattern. Equipped with a self-developed flexible tactile glove and a webcam, AviPer can simultaneously predict the grasping object and provide voice feedback using the vision-tactile fusion classification model, when a visually impaired people is perceiving the object with his gloved hand. To achieve accurate multimodal classification, we creatively embed three attention mechanisms, namely temporal, channel-wise, and spatial attention in the model. Experimental results show that AviPer can achieve an accuracy of 99.75% in classification of 10 daily objects. We evaluated the system in a variety of extreme cases, which verified its robustness and demonstrated the necessity of visual and tactile modal fusion. We also conducted tests in the actual use scene and proved the usability and user-friendliness of the system. We opensourced the code and self-collected datasets in the hope of promoting research development and bringing changes to the lives of visually impaired people.},
   ISSN = {2524-5228},
   DOI = {10.1007/s42486-022-00108-3},
   url = {https://doi.org/10.1007/s42486-022-00108-3},
   year = {2022},
   type = {Journal Article}
}

@inbook{RN41,
   author = {Malviya, Rishabha and Rajput, Shivam},
   title = {Introduction to the Role of Artificial Intelligence in Disability Support},
   booktitle = {Advances and Insights into AI-Created Disability Supports},
   editor = {Malviya, Rishabha and Rajput, Shivam},
   publisher = {Springer Nature Singapore},
   address = {Singapore},
   pages = {1-23},
   abstract = {Artificial intelligence (AI) enables significant changes through which disabled individuals can achieve inclusion and obtain necessary resources more smoothly. AI-driven computer vision technology exhibits significant potential to aid those with impairments in vision, mobility, or voice. AI-powered solutions simplify everyday activities and improve personable capabilities through skill-enhancement and freedom-gaining features. AI technology in assistive tools improves societal acceptance which allows people with major physical challenges to maintain independent living. This chapter presents advanced AI solutions that transform daily life throughout different areas of human activity. AI technology used for cognitive support and object recognition as well as face recognition and speech recognition tools has enhanced overall usability for users. AI technology enables early disease identification combined with personalized medical interventions which improves healthcare delivery to patients. Restoring mobility to disabled people is now possible with the use of artificial intelligence (AI)-driven exoskeletons and devices. AI-based adaptive learning systems support educational accessibility because they enable students with disabilities to receive tailored personalized learning experiences at an open level. The improvements enabled by AI transformations redefine the concept of both independence and accessibility. The emerging world has the potential to fully integrate technology into daily life, which would give disabled people more freedom and opportunities to participate with others.},
   ISBN = {978-981-96-6069-8},
   DOI = {10.1007/978-981-96-6069-8_1},
   url = {https://doi.org/10.1007/978-981-96-6069-8_1},
   year = {2025},
   type = {Book Section}
}

@article{RN10,
   author = {Montanha, Aleksandro and Oprescu, Andreea M. and Romero-Ternero, MCarmen},
   title = {A Context-Aware Artificial Intelligence-based System to Support Street Crossings For Pedestrians with Visual Impairments},
   journal = {Applied Artificial Intelligence},
   volume = {36},
   number = {1},
   pages = {2062818},
   ISSN = {0883-9514},
   DOI = {10.1080/08839514.2022.2062818},
   url = {https://doi.org/10.1080/08839514.2022.2062818},
   year = {2022},
   type = {Journal Article}
}

@article{RN36,
   author = {Morris, Meredith Ringel},
   title = {AI and accessibility},
   journal = {Commun. ACM},
   volume = {63},
   number = {6},
   pages = {35–37},
   ISSN = {0001-0782},
   DOI = {10.1145/3356727},
   url = {https://doi.org/10.1145/3356727},
   year = {2020},
   type = {Journal Article}
}

@inbook{RN40,
   author = {Ntoa, Stavroula and Margetis, George and Antona, Margherita and Stephanidis, Constantine},
   title = {Digital Accessibility in Intelligent Environments},
   booktitle = {Human-Automation Interaction: Manufacturing, Services and User Experience},
   editor = {Duffy, Vincent G. and Lehto, Mark and Yih, Yuehwern and Proctor, Robert W.},
   publisher = {Springer International Publishing},
   address = {Cham},
   pages = {453-475},
   abstract = {Intelligent everyday environments are expected to empower their inhabitants, assisting them in carrying out their everyday tasks, but also ensuring their well-being and prosperity. In this regard, the accessibility of an intelligent environment is of utmost importance to ensure that it fulfills user needs, but also that it is usable and useful for everyone, without imposing barriers or excluding individuals with disabilities or older adults. This chapter carries out a review of the state of the art in the field of interaction techniques in intelligent environments, analyzing their accessibility challenges and benefits for different user categories. Furthermore, toward the direction of universally accessible intelligent environments, the issue of multimodal interaction is discussed, summarizing the modalities that can be employed for each user group.},
   ISBN = {978-3-031-10780-1},
   DOI = {10.1007/978-3-031-10780-1_25},
   url = {https://doi.org/10.1007/978-3-031-10780-1_25},
   year = {2023},
   type = {Book Section}
}

@article{RN17,
   author = {Palmer, Zsuzsanna B. and Oswal, Sushil K.},
   title = {Constructing Websites with Generative AI Tools: The Accessibility of Their Workflows and Products for Users With Disabilities},
   journal = {Journal of Business and Technical Communication},
   volume = {39},
   number = {1},
   pages = {93-114},
   abstract = {Generative AI tools allow anyone without web-design experience to have a business website created when the user provides a few specifications about the business, such as its name, type, and location. But the resulting websites not only fall short of the business's basic needs but they also raise major concerns about their accessibility for disabled users. This study specifically examines whether these AI generated websites are accessible to screen-reader users with visual disabilities. It presents data about the usability and accessibility of the products of three generative AI website builders, highlights the specific problems found by an expert screen reader test along with an automated machine scan of these sites, and discusses some causes of and recommendations for solving these problems.},
   keywords = {web accessibility, generative AI website builders, AI training data, AI documentation},
   DOI = {10.1177/10506519241280644},
   url = {https://journals.sagepub.com/doi/abs/10.1177/10506519241280644},
   year = {2025},
   type = {Journal Article}
}

@inproceedings{RN15,
   author = {Park, Joon Sung and Bragg, Danielle and Kamar, Ece and Morris, Meredith Ringel},
   title = {Designing an online infrastructure for collecting AI data from people with disabilities},
   booktitle = {Proceedings of the 2021 ACM conference on fairness, accountability, and transparency},
   pages = {52-63},
   type = {Conference Proceedings}
}

@inproceedings{RN19,
   author = {Rajasekhar, N. and Panday, S.},
   title = {SiBo-The Sign Bot, Connected World for Disabled},
   booktitle = {2022 IEEE Women in Technology Conference (WINTECHCON)},
   pages = {1-6},
   DOI = {10.1109/WINTECHCON55229.2022.9832179},
   type = {Conference Proceedings}
}

@inproceedings{RN4,
   author = {Royal, Akula Bhargav and Sandeep, Balimidi Guru and Das, Bandi Mokshith and Bharath Raj Nayaka, A. M. and Joshi, Sujata},
   title = {VisionX—A Virtual Assistant for the Visually Impaired Using Deep Learning Models},
   booktitle = {Emerging Research in Computing, Information, Communication and Applications},
   editor = {Shetty, N. R. and Patnaik, L. M. and Prasad, N. H.},
   publisher = {Springer Nature Singapore},
   pages = {891-901},
   abstract = {We are living in world where there are many difficulties faced by various people. But, visual impairment is one of the biggest challenges faced by people. We can see visually impaired people using sticks or any other means for doing their tasks, but they can’t find or identify any object by themselves. In today’s world, there are many advancements in technology like artificial intelligence, machine learning where we can train machines for identifying different objects and different text present in a scene. The objective of this work is to develop a system using Esp32 Camera module to help the visually impaired to identify objects. The proposed system takes images from Esp32 and identifies different objects and text present in the scene and gives the output in the form of speech. The system has a voice assistant with commands for performing object detection, text detection and camera usage. This research helps visually impaired people in various aspects like object detection and text detection using voice commands and they can listen to the description of different objects detected by the system. The results show training accuracy of 97% and test accuracy of 88% using deep learning CNN models.},
   ISBN = {978-981-19-5482-5},
   type = {Conference Proceedings}
}

@article{RN34,
   author = {See, Aaron Raymond and Advincula, Welsey Daniel},
   title = {Creating Tactile Educational Materials for the Visually Impaired and Blind Students Using AI Cloud Computing},
   journal = {Applied Sciences},
   volume = {11},
   number = {16},
   pages = {7552},
   ISSN = {2076-3417},
   DOI = {10.3390/app11167552},
   url = {https://dx.doi.org/10.3390/app11167552},
   year = {2021},
   type = {Journal Article}
}

@article{RN38,
   author = {Soraya Hariyani Putri and Syifa Adiba},
   title = {AI and the Future of Library Accessibility: Making Digital Promotions Inclusive},
   journal = {Jurnal Ilmu Informasi Perpustakaan dan Kearsipan},
   volume = {26},
   number = {2},
   ISSN = {2502-7409},
   DOI = {10.7454/jipk.v26i2.1116},
   url = {https://dx.doi.org/10.7454/jipk.v26i2.1116},
   year = {2024},
   type = {Journal Article}
}

@article{RN28,
   author = {Sreemathy, R. and Turuk, Mousami and Kulkarni, Isha and Khurana, Soumya},
   title = {Sign language recognition using artificial intelligence},
   journal = {Education and Information Technologies},
   volume = {28},
   number = {5},
   pages = {5259-5278},
   abstract = {Sign language is the natural way of communication of speech and hearing-impaired people. Using Indian Sign Language (ISL) interpretation system, hearing impaired people may interact with normal people with the help of Human Computer Interaction (HCI). This paper presents a method for automatic recognition of two-handed signs of Indian Sign language (ISL). The three phases of this work include preprocessing, feature extraction and classification. We trained a BPN with Histogram Oriented Gradient (HOG) features. The trained model is used for testing the real time gestures. The overall accuracy achieved was 89.5% with 5184 input features and 50 hidden neurons. A deep learning approach was also implemented using AlexNet, GoogleNet, VGG-16 and VGG-19 which gave accuracies of 99.11%, 95.84%, 98.42% and 99.11% respectively. MATLAB is used as the simulation platform. The proposed technology is used as a teaching assistant for specially abled persons and has demonstrated an increase in cognitive ability of 60–70% in children. This system demonstrates image processing and machine learning approaches to recognize alphabets from the Indian sign language, which can be used as an ICT (information and communication technology) tool to enhance their cognitive capability.},
   ISSN = {1573-7608},
   DOI = {10.1007/s10639-022-11391-z},
   url = {https://doi.org/10.1007/s10639-022-11391-z},
   year = {2023},
   type = {Journal Article}
}

@inproceedings{RN16,
   author = {Theodorou, Lida and Massiceti, Daniela and Zintgraf, Luisa and Stumpf, Simone and Morrison, Cecily and Cutrell, Edward and Harris, Matthew Tobias and Hofmann, Katja},
   title = {Disability-first dataset creation: Lessons from constructing a dataset for teachable object recognition with blind and low vision data collectors},
   booktitle = {Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility},
   pages = {1-12},
   type = {Conference Proceedings}
}

@inbook{RN47,
   author = {Treviranus, Jutta and Holmes, Wayne and Porayska-Pomsta, Kaśka},
   title = {Learning to learn differently},
   publisher = {Routledge},
   edition = {1},
   pages = {25-46},
   abstract = {Our data-driven decision processes reduce diversity and complexity. Data analysis is dependent on large homogeneous data sets. This leads to bias against outliers and small minorities. Most Artificial Intelligence (AI) amplifies and automates this pattern. This worsens disparity and blind spots in education and research. Data is about the past; automated decisions based on data exacerbate past patterns. The disruption in education caused by the COVID-19 pandemic offers an opportunity to consider what it is we want AI to amplify and automate. Is this the trajectory we wish to accelerate using machine learning? How will this prepare students to navigate out of crises to come and the changes in society brought about by machine intelligence?},
   ISBN = {9780367349714},
   DOI = {10.4324/9780429329067-3},
   url = {https://www.taylorfrancis.com/chapters/edit/10.4324/9780429329067-3/learning-learn-differently-jutta-treviranus},
   year = {2023},
   type = {Book Section}
}

@article{RN42,
   author = {Trewin, Shari and Basson, Sara and Muller, Michael and Branham, Stacy and Treviranus, Jutta and Gruen, Daniel and Hebert, Daniel and Lyckowski, Natalia and Manser, Erich},
   title = {Considerations for AI fairness for people with disabilities},
   journal = {AI Matters},
   volume = {5},
   number = {3},
   pages = {40–63},
   DOI = {10.1145/3362077.3362086},
   url = {https://doi.org/10.1145/3362077.3362086},
   year = {2019},
   type = {Journal Article}
}

@article{RN27,
   author = {Ullah, Farman and Abuali, Najah Abed and Ullah, Asad and Ullah, Rehmat and Siddiqui, Uzma Abid and Siddiqui, Afsah Abid},
   title = {Fusion-Based Body-Worn IoT Sensor Platform for Gesture Recognition of Autism Spectrum Disorder Children},
   journal = {Sensors},
   volume = {23},
   number = {3},
   pages = {1672},
   ISSN = {1424-8220},
   DOI = {10.3390/s23031672},
   url = {https://dx.doi.org/10.3390/s23031672},
   year = {2023},
   type = {Journal Article}
}

@article{RN30,
   author = {Vieira, Alessandro Diogo and Leite, Higor and Volochtchuk, Ana Vitória Lachowski},
   title = {The impact of voice assistant home devices on people with disabilities: A longitudinal study},
   journal = {Technological Forecasting and Social Change},
   volume = {184},
   pages = {121961},
   abstract = {The impact of technological innovations in our lives has never been greater, for instance the use of assistive technology and artificial intelligence has resulted in smart devices, such as voice assistants (VA). However, empirical studies that understand the impact of VA technology on the individual and collective well-being of vulnerable people are still scarce. Thus, by conducting a series of longitudinal studies within the ecosystem of physically and visually impaired people, our study aims to respond to this void in the literature. Over a period of 30 weeks, we carried out 5 longitudinal case studies, and collected data from semi-structured interviews (n = 25), informal conversations (n = 23), observations (n = 25) and a focus group with participants (n = 8), as well as secondary data collected from the VA device reports. Our results identified themes related to the impact of technology on well-being, challenges and improvements, relationship building, privacy concerns and a gap between technology and inclusiveness. Furthermore, under the transformative service lens, we developed a framework that illustrates how interactions between people with disability and the VA technology co-design and co-create value for individual and collective well-being.},
   keywords = {Assistive technology, Google home, People with disability, Physical and visual impairments, Voice assistant, Vulnerable population},
   ISSN = {0040-1625},
   DOI = {10.1016/j.techfore.2022.121961},
   url = {https://www.sciencedirect.com/science/article/pii/S0040162522004826},
   year = {2022},
   type = {Journal Article}
}

@inproceedings{RN6,
   author = {Wadhwa, V. and Gupta, B. and Gupta, S.},
   title = {AI Based Automated Image Caption Tool Implementation for Visually Impaired},
   booktitle = {2021 International Conference on Industrial Electronics Research and Applications (ICIERA)},
   pages = {1-6},
   DOI = {10.1109/ICIERA53202.2021.9726759},
   type = {Conference Proceedings}
}

@inproceedings{RN26,
   author = {Watters, J. and Liu, C. and Hill, A. and Jiang, F.},
   title = {An artificial intelligence tool for accessible science education},
   booktitle = {IMCIC 2020 - 11th International Multi-Conference on Complexity, Informatics and Cybernetics, Proceedings},
   volume = {1},
   pages = {147-150},
   url = {https://www.scopus.com/inward/record.uri?eid=2-s2.0-85085951599&partnerID=40&md5=8ff1be6533d855bc4f3338a78ba516db},
   type = {Conference Proceedings}
}

@article{RN29,
   author = {Yang, Hao and Ling, Yifan and Kopca, Cole and Ricord, Sam and Wang, Yinhai},
   title = {Cooperative traffic signal assistance system for non-motorized users and disabilities empowered by computer vision and edge artificial intelligence},
   journal = {Transportation Research Part C: Emerging Technologies},
   volume = {145},
   pages = {103896},
   abstract = {Information and communication technology has many promising benefits including improvement the traffic network capacity, efficiency, and stability. However, to date, most of the improvements in signal management and interactions in connected vehicle environments focus solely on the vehicular side. This has led to a massive gap for non-motorized users and vulnerable road users. Specifically, deficit perception capability, inconsistent dissemination, obsolescent acquisition techniques, and ignorance of equality make the current experience of the active non-motorized users inconvenient and risky, especially for those with disabilities. To serve the users in an unbiased and automated way, a novel cooperated signal phase and timing (SPaT) services infrastructure — Vision Enhanced Non-motorized Users Services (VENUS) smart node is proposed. With customized up-to-date computer vision algorithms and artificial intelligence pipelines on the edge, VENUS smart node can collect necessary active-user information (including location, class, pose direction and mobility status), and generate directional crossing request for every pedestrian and cyclist in real time. Meanwhile, the improved communication system makes the VENUS node a reliable information hub to share the SPaT messages and carry interactions to/from the signal controller, connected vehicles and user personal information devices (i.e., cell phones, wearable devices) through various protocols. Based on extensive experimentation, 1076 testing users from six intersections, the VENUS sensing achieves 90.24% accuracy on directional-aware crossing trigger generation and 89.87% accuracy on mobility status estimation for normal users and four types of disabled persons. Furthermore, the VENUS smart node is fully compatible with the connected vehicles environment, and improves the signal system at low cost, mainly due to its flexibility and adaptability with existing infrastructure. 
The VENUS smart node is the first connected infrastructure architecture that integrates traffic sensing, data processing and information dissemination together for the self-operating indistinguishable signal services based on edge computing.},
   keywords = {Signal Phase and Timing (SPaT), Edge computing, Smart infrastructure, Disability user, Computer vision, Connected vehicle},
   ISSN = {0968-090X},
   DOI = {10.1016/j.trc.2022.103896},
   url = {https://www.sciencedirect.com/science/article/pii/S0968090X22003096},
   year = {2022},
   type = {Journal Article}
}



@misc{RN11,
   author = {Almeida, Rafael and Duarte, Carlos},
   title = {Analysis of automated contrast checking tools},
   publisher = {Association for Computing Machinery},
   pages = {Article 18},
   keywords = {accessibility, automated evaluation, color contrast, text},
   DOI = {10.1145/3371300.3383348},
   url = {https://doi.org/10.1145/3371300.3383348},
   year = {2020},
   type = {Conference Paper}
}

@article{RN2,
   author = {Ismailova, Rita and Inal, Yavuz},
   title = {Comparison of Online Accessibility Evaluation Tools: An Analysis of Tool Effectiveness},
   journal = {IEEE Access},
   volume = {10},
   pages = {58233-58239},
   ISSN = {2169-3536},
   DOI = {10.1109/access.2022.3179375},
   url = {https://dx.doi.org/10.1109/access.2022.3179375},
   year = {2022},
   type = {Journal Article}
}

@article{RN3,
   author = {Leotta, Maurizio and Mori, Fabrizio and Ribaudo, Marina},
   title = {Evaluating the effectiveness of automatic image captioning for web accessibility},
   journal = {Universal Access in the Information Society},
   volume = {22},
   number = {4},
   pages = {1293-1313},
   ISSN = {1615-5289},
   DOI = {10.1007/s10209-022-00906-7},
   url = {https://dx.doi.org/10.1007/s10209-022-00906-7},
   year = {2023},
   type = {Journal Article}
}

@article{RN8,
   author = {Lv, Zhihan},
   title = {Generative artificial intelligence in the metaverse era},
   journal = {Cognitive Robotics},
   volume = {3},
   pages = {208-217},
   abstract = {Generative artificial intelligence (AI) is a form of AI that can autonomously generate new content, such as text, images, audio, and video. Generative AI provides innovative approaches for content production in the metaverse, filling gaps in the development of the metaverse. Products such as ChatGPT have the potential to enhance the search experience, reshape information generation and presentation methods, and become new entry points for online traffic. This is expected to significantly impact traditional search engine products, accelerating industry innovation and upgrading. This paper presents an overview of the technologies and prospective applications of generative AI in the breakthrough of metaverse technology and offers insights for increasing the effectiveness of generative AI in creating creative content.},
   ISSN = {2667-2413},
   DOI = {10.1016/j.cogr.2023.06.001},
   url = {https://www.sciencedirect.com/science/article/pii/S2667241323000198},
   year = {2023},
   type = {Journal Article}
}

@article{RN5,
   author = {Millett, Pam},
   title = {Accuracy of Speech-to-Text Captioning for Students Who are Deaf or Hard of Hearing},
   journal = {Journal of Educational, Pediatric & (Re)Habilitative Audiology},
   volume = {25},
   pages = {1-13},
   note = {This research study was supported by a Minor Research Grant from the Faculty of Education at York University, Toronto, Ontario, Canada.},
   abstract = {Speech-to-text technology (also referred to as automatic speech recognition, or ASR) is now available in apps and software, offering opportunities for deaf/hard of hearing students to have real time captioning at their fingertips. However, speech-to-text technology must be proven to be accurate before it should be considered as an accommodation for students. This study assessed the accuracy of eight apps, software and platforms to provide captions for i) a university lecture given by a native English speaker in real time ii) a video of the lecture, and iii) a conversation between 3 students in real time, using real speech under controlled acoustical conditions. Accuracy of transcribed speech was measured in two ways: a Total Accuracy score indicating % of words transcribed accurately, and a Meaning Accuracy score, which considered transcription errors which impacted the meaning of the message. Technologies evaluated included Interact Streamer, Ava, Otter, Google Slides, Microsoft Stream, Microsoft Translator, Camtasia Studio and YouTube. For the lecture condition, 4 of 5 technologies evaluated exceeded 90% accuracy, with Google Slides and Otter achieving 98% and 99% accuracy. Overall accuracy for video captioning was highest, with 5 of 6 technologies achieving greater than 90% accuracy, and accuracy rates for YouTube, Microsoft Stream and Otter of 98-99%. Accuracy for captioning a real time conversation between 3 students was greater than 90% for both technologies evaluated, Ava and Microsoft Translator. Results suggest that, given excellent audio quality, speech-to-text technology accuracy is sufficient to consider use by postsecondary students.},
   keywords = {Deafness, Hearing Disorders, Voice Recognition Systems, Software, Human, Descriptive Statistics, Students, Funding Source},
   ISSN = {2378-0916},
   url = {https://search.ebscohost.com/login.aspx?direct=true&AuthType=ip,shib&db=ccm&AN=155359972&site=ehost-live&custid=s3358796},
   year = {2021},
   type = {Journal Article}
}

@article{RN9,
   author = {Morris, Amanda},
   title = {For Blind Internet Users, the Fix Can Be Worse Than the Flaws},
   journal = {The New York Times},
   keywords = {Actions and defenses, Automation, Blindness, Employees, Internet, Jurisprudence, Litigation, Software, Vision disorders},
   ISSN = {1553-8095},
   year = {2022},
   type = {Journal Article}
}

@misc{RN6,
   author = {Vigo, Markel and Brown, Justin and Conway, Vivienne},
   title = {Benchmarking web accessibility evaluation tools: measuring the harm of sole reliance on automated tests},
   publisher = {Association for Computing Machinery},
   pages = {Article 1},
   keywords = {WCAG, accessibility, benchmark, evaluation, testing, tools},
   DOI = {10.1145/2461121.2461124},
   url = {https://doi.org/10.1145/2461121.2461124},
   year = {2013},
   type = {Conference Paper}
}

@article{RN7,
   author = {Xu, Yongjun and Liu, Xin and Cao, Xin and Huang, Changping and Liu, Enke and Qian, Sen and Liu, Xingchen and Wu, Yanjun and Dong, Fengliang and Qiu, Cheng-Wei and Qiu, Junjun and Hua, Keqin and Su, Wentao and Wu, Jian and Xu, Huiyu and Han, Yong and Fu, Chenguang and Yin, Zhigang and Liu, Miao and Roepman, Ronald and Dietmann, Sabine and Virta, Marko and Kengara, Fredrick and Zhang, Ze and Zhang, Lifu and Zhao, Taolan and Dai, Ji and Yang, Jialiang and Lan, Liang and Luo, Ming and Liu, Zhaofeng and An, Tao and Zhang, Bin and He, Xiao and Cong, Shan and Liu, Xiaohong and Zhang, Wei and Lewis, James P. and Tiedje, James M. and Wang, Qi and An, Zhulin and Wang, Fei and Zhang, Libo and Huang, Tao and Lu, Chuan and Cai, Zhipeng and Wang, Fang and Zhang, Jiabao},
   title = {Artificial intelligence: A powerful paradigm for scientific research},
   journal = {The Innovation},
   volume = {2},
   number = {4},
   pages = {100179},
   ISSN = {2666-6758},
   DOI = {10.1016/j.xinn.2021.100179},
   url = {https://dx.doi.org/10.1016/j.xinn.2021.100179},
   year = {2021},
   type = {Journal Article}
}
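A note for anyone merging the two lists: because they were exported separately, some citation keys repeat across them (for example RN2, RN5 and RN6 each appear in both), which would cause key collisions in a single .bib file. A minimal stdlib-only Python sketch for spotting such duplicates before merging (the sample string below is illustrative, not the full file):

```python
import re
from collections import Counter

def bib_keys(text: str) -> list[str]:
    """Extract citation keys from BibTeX entry headers like @article{RN5,"""
    return re.findall(r'@\w+\s*\{\s*([^,\s{}]+)\s*,', text)

# Illustrative sample standing in for the concatenated lists
sample = "@article{RN5,\n}\n@misc{RN6,\n}\n@article{RN5,\n}"
dupes = sorted(k for k, n in Counter(bib_keys(sample)).items() if n > 1)
print(dupes)  # prints ['RN5'] - keys that would collide when merging
```

Duplicated keys would need renumbering (or a prefix per list) before the combined file can be used with BibTeX.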



Dr Scott Hollier
Chief Executive Officer
[Centre for Accessibility Australia logo]<https://www.accessibility.org.au/>
Centre For Accessibility Australia Ltd.
Phone: +61 (0)430 351 909
Email: scott.hollier@accessibility.org.au<mailto:scott.hollier@accessibility.org.au>
Address: Suite 5, Belmont Hub, 213 Wright Street, Cloverdale WA 6105
accessibility.org.au<https://www.accessibility.org.au/>
Subscribe to our newsletter<http://eepurl.com/drA-ib>

[X icon]<https://twitter.com/centrefora11y>[Instagram icon]<https://www.instagram.com/centreforaccessibility/> [Facebook icon] <https://www.facebook.com/centrefora11y/>  [LinkedIn icon] <https://www.linkedin.com/company/centreforaccessibility/>

CFA Australia respectfully acknowledges the Traditional Owners of Country across Australia and pays respects to Elders past and present.


From: Scott Hollier <scott.hollier@accessibility.org.au>
Sent: Tuesday, 17 June 2025 8:34 PM
To: RQTF <public-rqtf@w3.org>
Subject: references added to AI wiki

To the RQTF

Following on from the action for putting references into the wiki that Jason created, the wiki has been updated with two reference sections. The first is a collection of 41 references related to our research; it includes two book chapters and a paper by Jutta which make for great reading. The second is the references we’ve already included in the current draft.

I’ve also attached to this e-mail a copy of the same references, with both lists in BibTeX, as that may help us later.

From memory, there was a GitHub issue noting the need to pull together a reference list like this, which this action responds to, so if a pointer to the wiki could be added to that issue it would be great. Apologies if my formatting attempt in the wiki needs improving.

Also, unfortunately I have a Wednesday evening commitment this week with the Board of my organisation, which will make it unlikely I’ll be able to join RQTF this week, so sadly I’m likely to be an apology.

Thanks everyone

Scott.




Received on Wednesday, 18 June 2025 02:26:32 UTC