Accessibility in mind? A nationwide study of K-12 Web sites in the United States
by Royce Kimmons and Jared Smith

First Monday



Abstract
Web site accessibility is a serious civil rights concern that has historically been difficult to measure and to establish success criteria for. By conducting automated accessibility analyses via the WAVE tool, we calculated accessibility norms of a statistically appropriate, random sample of K–12 school Web sites across the U.S. (n = 6,226) and merged results with national datasets to determine how school demographics influence accessibility. Results indicated that schools across all demographic groups generally struggle to make their Web sites fully accessible to their universe of diverse users and revealed that the concrete, highest-impact steps that schools nationwide need to take to improve accessibility include improving poor contrast between text and backgrounds, providing alternative text to images and other visual elements, and labeling form controls.

Contents

Introduction
Methods
Results
Discussion
Limitations
Conclusion

 


 

Introduction

A high school student with a vision disability uses a screen reader to listen to graduation announcements posted on her school Web site but has difficulty finding where they are located. A middle-school student who suffers from congenital hearing loss is required to watch a YouTube video for homework but discovers that no captions or transcript are available. And a struggling grandmother raising an elementary-aged child on her own attempts to open a school-provided gradebook only to find that the text is too small for her to read. These scenarios likely unfold across the U.S. as students, parents, and community members attempt to access information from K–12 school Web sites only to find that these sites are not designed with their needs in mind.

In 1990, the U.S. passed the Americans with Disabilities Act (ADA), which prohibits discrimination based upon disability, also known as ableism. While the ADA does not define specific technical guidelines for Web content, the U.S. Department of Justice has clarified (e.g., U.S. Department of Justice, 2003, 2015) that Web sites are covered under the provisions of the ADA. In other words, an inaccessible Web site can be considered discriminatory, and schools that do not provide their resources and systems in a universally accessible manner may be violating the civil rights of a group estimated to represent up to 12.6 percent of the U.S. population (Bialik, 2017). By comparison, the entire African American population of the U.S. represents an identical 12.6 percent (U.S. Census Bureau, 2011).

The U.S. Office for Civil Rights, whose mission it is “to ensure equal access to education and to promote educational excellence through vigorous enforcement of civil rights in our nation’s schools” (U.S. Department of Education, 2017), has responded to many complaints regarding accessibility that have been made by education stakeholders in recent years. By April 2018, their database reported over 1,300 resolution letters issued to school districts, universities, and state departments of education since 2013 on the topic of “Web site accessibility” (U.S. Department of Education, 2018). Though the complaint and resolution process has subsequently been scheduled for revision to ensure accuracy and fairness, it is nonetheless apparent that many K–12 schools across the U.S. have been required to implement procedures, such as hiring new personnel and initiating quality assurance measures, to make their Web sites more accessible to their constituents. As the day-to-day and community outreach efforts of schools become increasingly digital, this is a matter of concern for a variety of stakeholders, including special educators, who must meet the needs of diverse learners; educational leaders, who must account for the legal and ethical implications of their institutions’ accessibility measures; and educators, students, and parents generally, who benefit from improved universal access to critical online K–12 resources.

When putting specific school districts under scrutiny, a recent analysis found that a handful of the largest school district Web sites in the U.S. had “extensive accessibility issues” in the areas of perceivability, operability, understandability, and robustness (Bolkan, 2018). This underscores previous small-scale studies that have shown that 86 percent to 100 percent of targeted school Web sites persist in exhibiting a variety of accessibility issues over time (Klein, et al., 2003; Krach and Milan, 2009) and corroborates anecdotal sentiments we have heard from educational technologists and leaders that K–12 schools are generally struggling to make their Web site resources fully accessible to students and parents.

Yet, how widespread and serious is this issue? What impact is it having on students, parents, and communities? And, perhaps most importantly, what must we do to address this issue in a system-wide manner? These questions necessarily build upon each other, but currently even the first question lacks any semblance of a clear answer. To date, there is no generalizable evidence of how accessible K–12 Web sites are (or are not), and this lack of knowledge prevents us from determining how the issue affects families and how much systemic attention it merits.

The reason for our current lack of knowledge is likely two-fold. First, K–12 schools constitute a massive population to study in any way that allows for generalizability or that accounts for state and local differences. Current National Center for Education Statistics (NCES) data are available on 98,447 schools across all 50 states and the District of Columbia, and there is currently no reliable list available of all or even a representative sample of K–12 Web sites. This makes it very difficult to identify a representative sample of these schools’ Web sites that accounts for the various demographic and political complexities that shape their policies and procedures for further analysis. And second, Web site accessibility is a difficult construct to measure, because evaluators must rely upon various contextual and complicated judgments in determining whether resources are accessible to a universe of extremely diverse users (Rose and Meyer, 2002). The World Wide Web Consortium’s (W3C, 2008) Web Content Accessibility Guidelines (WCAG) touch on technical considerations ranging from the presentation of non-text content to keyboard operability of the interface, and many of these considerations rely upon highly complicated judgments that take into consideration a variety of factors ranging from technical tagging to semantic representation.

Yet, even determining if a Web site is accessible is problematic. To provide a proxy for accessibility, much effort has been given to developing automated procedures to identify accessibility issues, but such automated testing can only identify perhaps 20–30 percent of actual issues (Bureau of Internet Accessibility, 2018), presumably because automated procedures have difficulty replicating the multimodal, subjective, and interactive meaning-making processes that humans engage in with Web content (e.g., perception, operability, understandability and robustness). Thus, though automated processes may exhibit a high level of precision, they will nonetheless likely underrepresent accessibility issues (i.e., limited recall or the possibility of type II errors), which must always be accounted for in their interpretation. Just as a person with a vision disability might struggle to discern the meaning of text in an image or to determine whether a video caption is equivalent, so too do even the most sophisticated automated processes, thereby making the accurate identification of many accessibility issues difficult.

Furthermore, even how we conceptualize the results of accessibility testing is fraught with difficulties: results may be framed in general terms (e.g., “this site is mostly accessible”) or by emphasizing instances of inaccessibility (e.g., “this element of the Web site has a serious problem”). This makes developing a general understanding of Web site accessibility difficult, because stakeholders may have fundamental disagreements on what distinguishes an accessible site (e.g., one where 90 percent of elements are accessible) from an inaccessible site (e.g., one where 10 percent of elements are inaccessible).

From this introduction, a few guiding assumptions seem apparent. First, Web site accessibility for K–12 schools is a potential problem with both ethical and legal ramifications, but we have no baseline, generalizable understanding of how widespread and severe the problem really is. Second, measuring and reporting on Web site accessibility is difficult at any scale, because accessibility is a hard construct to measure and to report on in meaningful ways. Thus, this study seeks to take a first step in addressing both of these issues by (1) conducting a first-of-its-kind, nationwide accessibility evaluation of K–12 school Web sites; and, (2) presenting results through four analytical frameworks that may be used for teasing out the overarching narrative of accessibility.

 

++++++++++

Methods

Our guiding research question for this study was: How accessible are K–12 Web sites, and what high-impact areas can leaders focus on to improve accessibility? To answer this question, we began with a pre-collected list of K–12 school Web site addresses in the U.S. (n = 65,404), which the lead author had compiled in a previous study via a number of manual and automated means (Kimmons, et al., under review). Using a random sample of roughly 10 percent of these addresses (n = 6,226), we employed the WebAIM WAVE accessibility evaluation tool to conduct automated analyses of Web site homepages in April 2018. We then connected these results back to NCES school demographic data, which provided census results on 98,447 schools. Using this as an indicator of the total number of schools in the U.S., our randomized sampling approach allowed our results to generalize to all schools in the U.S. with a confidence interval of +/-1.75 percent at the 99 percent confidence level.
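To make the sampling claim concrete, the margin of error for a proportion can be checked with the standard worst-case formula and a finite population correction. The sketch below is illustrative only; the exact convention the authors used to arrive at +/-1.75 percent is not stated, and the function name, rounding, and printout are ours.

```python
import math

def margin_of_error(n, population, z=2.576, p=0.5):
    """Worst-case (p = 0.5) margin of error for a proportion at the
    99 percent level (z = 2.576), with a finite population correction."""
    se = math.sqrt(p * (1 - p) / n)
    fpc = math.sqrt((population - n) / (population - 1))
    return z * se * fpc

# Figures reported in the study: 6,226 sampled homepages out of
# 98,447 NCES-listed schools.
moe = margin_of_error(n=6_226, population=98_447)
print(f"Margin of error: +/-{moe:.2%}")
# Roughly +/-1.6 percent, comfortably within the +/-1.75 percent bound reported above.
```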

Homepage emphasis

For this analysis, we followed parameters used in previous studies that have intentionally limited accessibility analyses to homepages (e.g., Jaeger, 2006). The reasons for this are manifold. First, homepages represent a singular artifact of analysis that should be exemplary in terms of accessibility, because they often operate as the entry point to all other content on a Web site. Thus, one would expect homepages to serve as above-average proxies for the overall accessibility of a complete Web site. So, if homepages exhibit accessibility problems, then we would anticipate this to be a conservative representation of the site’s overall accessibility, and in fact, if homepages struggle in this regard, then the accessibility of secondary pages may be a moot point, because a user’s ability to access secondary pages is often dependent upon the homepage. Second, sites vary in the quantity of secondary pages that they utilize and may range from a handful to many thousands. This makes comparisons between sites difficult, because they represent artifacts of exponentially different magnitudes. This presents a comparison problem as well as a logistical problem, because it might take as much processing time and power to analyze a single Web site as it would to analyze thousands of homepages. For these reasons, we limited our analysis to homepages and used them as an optimistic proxy for what school Web sites might look like more generally.

WAVE tool

To conduct automated accessibility analyses of these Web sites, we used WebAIM’s WAVE API, which provides an overall automated accessibility metric for pages based on error density — a function of the number of errors on a page and number of page elements — as well as total error and alert counts to help guide developers in making their sites more accessible. The various versions of WAVE — an online version at http://wave.webaim.org/, Chrome and Firefox browser extensions, and API (application programming interface) — have been used by hundreds of thousands of users to analyze millions of Web pages. The core component of WAVE is a Web accessibility analysis engine that has been developed and refined over 18 years. The WAVE evaluation engine uses complex logic or ‘rules’ for evaluating a wide array of content or functional components that may appear in Web content. Each WAVE rule is categorized as an accessibility error, alert, feature, structural element, or HTML5/ARIA element. These rules and categories are primarily based on established Web accessibility standards, such as WCAG 2.0 (2008) and the W3C’s WAI–ACT framework (https://www.w3.org/WAI/ACT/).
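For readers who wish to replicate this kind of analysis, a single page can be submitted to the WAVE API with an ordinary HTTP request. The sketch below is a minimal example under stated assumptions: the endpoint, the key/url/reporttype parameters, and the categories/statistics field names reflect our reading of WebAIM's public API documentation and should be verified against the current docs before use; the school URL is hypothetical.

```python
import requests  # third-party: pip install requests

WAVE_ENDPOINT = "https://wave.webaim.org/api/request"  # assumed endpoint
API_KEY = "YOUR_WAVE_API_KEY"                          # placeholder credential

def evaluate_homepage(url: str) -> dict:
    """Request an automated WAVE evaluation of one page and summarize it."""
    resp = requests.get(
        WAVE_ENDPOINT,
        params={"key": API_KEY, "url": url, "reporttype": 1},  # assumed parameters
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()
    categories = data.get("categories", {})
    statistics = data.get("statistics", {})
    return {
        "url": url,
        "errors": categories.get("error", {}).get("count", 0),  # assumed field names
        "alerts": categories.get("alert", {}).get("count", 0),
        "elements": statistics.get("totalelements", 0),
    }

if __name__ == "__main__":
    print(evaluate_homepage("https://www.example-school.k12.us"))  # hypothetical URL
```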

Analytical framework

Since determining accessibility is a difficult process that must consider a variety of indicators and address various levels of severity, we proceeded to answer our research question by organizing WAVE and statistical analyses according to four different framing constructs: WCAG 2.0 failures, error density rates, error counts, and alert counts. Each of these framing constructs merits description and explanation.

WCAG failure rates. First, the 2.0 version of the Web Content Accessibility Guidelines (WCAG) created by the World Wide Web Consortium (W3C) provides four overarching principles with 12 guidelines and 61 success criteria. Conformance with WCAG is measured at the success criterion level, with each success criterion assigned a level — A, AA, or AAA — with A and AA conformance generally recognized as the acceptable standard for Web site accessibility. Errors identifiable via WAVE can readily and reliably be aligned to failures of seven WCAG 2.0 success criteria:

  • 1.1.1 — Non-text Content (Level A)
         ○ Image and/or image button is missing alternative text, form control is not labeled, link or button is empty, or ARIA reference is broken
  • 1.3.1 — Info and Relationships (Level A)
         ○ Form control is not labeled or table header is empty
  • 2.2.2 — Pause, Stop, Hide (Level A)
         ○ Page refreshes without user control, blinking or scrolling content
  • 2.4.2 — Page Titled (Level A)
         ○ Page title is missing or is not descriptive
  • 2.4.4 — Link Purpose (In Context) (Level A)
         ○ Link, linked image, or button has no alternative text or value
  • 2.4.6 — Headings and Labels (Level AA)
         ○ Heading is empty
  • 3.1.1 — Document Language (Level A)
         ○ The document language is not specified

Identification of these failures allows determination of non-conformance to WCAG criteria. Failure to meet any of these success criteria constitutes failure to address the overarching guidelines at the appropriate level (A or AA) of that particular criterion. While WAVE can only provide highly reliable non-conformance analysis of seven success criteria, these particular criteria tend to be the most impactful on users with disabilities, and WCAG 2.0 has primarily been the metric for determining discrimination under the Americans with Disabilities Act, wherein WCAG 2.0 Level A/AA conformance has been required of numerous K–12 schools as part of Office for Civil Rights structured negotiations.
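To illustrate how detected error items translate into non-conformance, the sketch below pairs each of the seven success criteria listed above with the kinds of detectable errors described for it. The item identifiers are hypothetical stand-ins rather than WAVE's actual report codes, which should be taken from WAVE's own documentation.

```python
# Illustrative mapping of detectable error items to the seven WCAG 2.0 success
# criteria above; item names are invented placeholders, not official WAVE codes.
WCAG_FAILURE_MAP = {
    "1.1.1 Non-text Content (A)": {
        "alt_missing", "alt_input_missing", "label_missing",
        "link_empty", "button_empty", "aria_reference_broken",
    },
    "1.3.1 Info and Relationships (A)": {"label_missing", "th_empty"},
    "2.2.2 Pause, Stop, Hide (A)": {"meta_refresh", "blink", "marquee"},
    "2.4.2 Page Titled (A)": {"title_invalid"},
    "2.4.4 Link Purpose, In Context (A)": {"link_empty", "alt_link_missing", "button_empty"},
    "2.4.6 Headings and Labels (AA)": {"heading_empty"},
    "3.1.1 Document Language (A)": {"language_missing"},
}

def failed_criteria(detected_items: set) -> list:
    """Return the success criteria a page fails, given its detected error items."""
    return [sc for sc, items in WCAG_FAILURE_MAP.items() if items & detected_items]

# Example: a page with an unlabeled form control and an empty link
# fails 1.1.1, 1.3.1, and 2.4.4.
print(failed_criteria({"label_missing", "link_empty"}))
```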

Error density rates. Second, error density rates represent the volume of element errors in comparison to the number of elements present within the page. Consider that a page consisting of only one image, which exhibits one image error (an error density of 100 percent), may be wholly inaccessible, while a page with 20 images and one image error (an error density of five percent) may be very accessible (Lopes, et al., 2010). Thompson, et al. (2013) similarly calculated accessibility as a percentage rather than as a game of finding single or isolated errors within a Web page. Such approaches are also common in software quality testing, where error density consists of post-release defects per unit of product size, such as per thousand lines of code (Shah, et al., 2013). Other approaches measure the number of defects per function point or feature (Desai and Srivastava, 2012). Defect density measures the quality of a software product and can be used to indicate quality improvement or decline in successive releases. Ostensibly, the lower the defect density, the better the software quality (Javed and Alenezi, 2016), and such error densities can be monitored over time to determine quality fluctuations across release cycles (Mitterer, 2008).

For Web content and applications, numerous measures of page size and complexity have been explored (Calero Muñoz, et al., 2008). Because code length or size does not directly correlate to Web page content, functionality, or complexity, and because WAVE errors are identified at the page element level (e.g., each image, button, paragraph of text, container element, etc. represents a page element or DOM node) in the rendered Web page, the number of detectable errors divided by the number of page elements is used to calculate error density.
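As a quick arithmetic sketch of the density measure just described, using error and element counts of the kind WAVE reports (the figures below are illustrative, not values from the study):

```python
def error_density(error_count: int, element_count: int) -> float:
    """Detectable errors per rendered page element (DOM node), as a proportion."""
    if element_count == 0:
        return 0.0
    return error_count / element_count

# The two hypothetical pages discussed above:
print(f"{error_density(1, 1):.0%}")    # one-image page, one error   -> 100%
print(f"{error_density(1, 20):.0%}")   # twenty-image page, one error -> 5%

# At the sample mean reported later (about 3.7 percent), a 600-element
# homepage would carry roughly 22 detectable errors.
print(round(0.037 * 600))
```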

Interpreting accessibility in this manner allows us to identify accessibility issues as a representation of all technical elements wherein errors may be contextually interpreted in terms of the amount of content they represent rather than as a series of all-or-nothing, pass-or-fail compliance tests. It can also allow us to determine whether errors reflect systematic ignorance (e.g., all elements of a particular type being incorrectly used) or singular accidents (e.g., one image in 20 being misused).

Error counts. Third, because the median homepage in our sample had over 600 elements, and the most complex pages approached 10,000 elements, even relatively low error density rates might translate into various difficulties as users begin parsing through element-heavy pages. For this reason, a calculation of the sheer volume of errors present in these pages is also important to understand, because each one represents a potential barrier to a user while using these sometimes highly complicated sites. Furthermore, the same types of errors may be replicated many times on the same page, meaning that ignoring a guideline for a single element can affect users many times over if that element is critical to page functionality or is used frequently.

Alert counts. And finally, though error counts provided a baseline understanding or lower-range approximation of accessibility issues on these Web sites, few end user issues are fully detectable in an automated fashion, and oftentimes automated processes can at best provide warnings or alerts to human evaluators to look more closely at suspicious items that might exhibit an error (e.g., an image with a file name like “headshot.jpg” as alternative text, which might not meaningfully describe the image). To illustrate, we conducted human coding of a random sample of K–12 homepage images (n = 150) and found that only 30.67 percent of instances used image alternative text correctly. Incorrect uses of alternative text took four forms: (1) not providing an alt attribute (13.33 percent); (2) improperly using a null/empty alt attribute value for non-decorative images (17.33 percent); (3) providing incomplete or inaccurate descriptions in the alt attribute (36.67 percent); and (4) providing an unnecessary description for decorative images (two percent). The null/empty alt attribute value is of particular interest here, because only three of the 29 instances of its use in the random sample were proper (i.e., for a purely decorative image), revealing a 90 percent error rate in null/empty alt use. Thus, in reality, alerts actually represent many error instances that are simply too difficult to determine with complete certainty via automated processes, but identifying alert rates can give us a sense of how widely problems might extend and areas toward which human evaluators should focus their attention when determining actual accessibility.
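To make the distinction between errors and alerts concrete, the following is a rough, hypothetical triage of an image's alt attribute of the kind an automated checker might perform. The patterns and function are illustrative only and are not WAVE's actual rules.

```python
import re
from typing import Optional

# File-name-like or placeholder text that probably does not describe the image.
SUSPICIOUS_ALT = re.compile(
    r"\.(jpe?g|png|gif|svg)$|^(image|img|photo|picture|graphic|spacer)\d*$",
    re.IGNORECASE,
)

def triage_alt(alt: Optional[str]) -> str:
    """Rough triage: 'error' when automation can be certain, 'alert' when a
    human should look closer. Illustrative only, not WAVE's logic."""
    if alt is None:
        return "error"   # missing alt attribute: always detectable
    if alt.strip() == "":
        return "alert"   # null/empty alt is valid only for decorative images,
                         # which automation cannot reliably judge
    if SUSPICIOUS_ALT.search(alt.strip()):
        return "alert"   # e.g., "headshot.jpg" supplied as alternative text
    return "ok"

for alt in [None, "", "headshot.jpg", "Students at the 2018 science fair"]:
    print(repr(alt), "->", triage_alt(alt))
```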

 

++++++++++

Results

Our overarching research question was broad and difficult to answer directly. To approach it in a rigorous and meaningful way, we organize our results through each of the four analytic lenses described above: WCAG failure rates, error density rates, error counts, and alert counts.

WCAG failure rates

Table 1 provides descriptive results of WCAG failure rates as well as the item codes that were evaluated to determine failures. Overall, 65.9 percent of homepages exhibited failure on at least one of the seven WCAG 2.0 success criteria we tested. The most commonly failed success criterion involved textual alternatives to non-text content (63.6 percent), followed by the link purpose being discernible from contextual text (42.2 percent), and structural info and relationships being clear for elements such as form inputs and headings (30.1 percent). Of note, WCAG 1.1.1 errors had a very high correlation with the presence of any other WCAG error (r2 = .82), meaning that they can serve as a strong predictive indicator of a site’s overall WCAG failure likelihood.

 

 
Table 1: Frequencies of WCAG 2.0 failure rates.

 

To determine how school demographic factors influenced these failure rates, we conducted a series of chi-square tests of independence and found that relationships were significant in the cases of charter status, χ2 (1) = 56.43, p <.001; urbanity, χ2 (3) = 97.33, p <.001; Title I status, χ2 (1) = 10.26, p =.001; free-and-reduced lunch rate quartile, χ2 (3) = 16.63, p =.001; and student-teacher-ratio quartile, χ2 (3) = 17.5, p =.001. Thus, charter, city, and non-Title I schools were more likely to fail the WCAG guidelines than were their counterparts, and free-and-reduced lunch rates and student-teacher ratios also had a significant (though non-linear) effect (cf., Table 2). However, the strength of these effects was relatively low, with Phi absolute values ranging from .04 to .13.
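For readers less familiar with the procedure, the sketch below runs a chi-square test of independence of the kind reported above on an invented 2x2 table of WCAG failure by charter status; the counts are hypothetical and are not the study's data.

```python
import math
from scipy.stats import chi2_contingency  # third-party: pip install scipy

# Hypothetical 2x2 table: rows are charter vs. non-charter schools,
# columns are failed vs. passed the tested WCAG success criteria.
table = [
    [310, 120],    # charter:     failed, passed
    [3790, 2006],  # non-charter: failed, passed
]

chi2, p, dof, expected = chi2_contingency(table)
n = sum(sum(row) for row in table)
phi = math.sqrt(chi2 / n)  # effect size for a 2x2 table

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}, phi = {phi:.3f}")
```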

 

 
Table 2: Chi-square test results of WCAG failure rates by school demographic.

 

Error density rates

Overall, K–12 homepages exhibited relatively low error density (M = 3.7 percent; SD = 4.3 percent) with positive skewness and a high range of variability (0 percent to 44.9 percent), meaning that programmatically testable elements were generally used properly by most schools, but that for roughly every 27 elements (such as an image, heading, or link) the Web sites exhibited a detectable error. Given the nonparametric nature of the data, we conducted a series of Kruskal-Wallis H tests to determine the effects of school demographics on error density rates (cf., Table 3). Results indicated that significant differences in error density rates existed between schools based on charter status, χ2 (1) = 152.46, p <.001; urbanity, χ2 (3) = 71.05, p <.001; school size quartile, χ2 (3) = 20.4, p <.001; free-and-reduced lunch rate quartile, χ2 (3) = 13.23, p =.01; and student-teacher ratio quartile, χ2 (3) = 14.03, p <.01. An additional Kruskal-Wallis H test that utilized the school’s state as the independent variable found that error density rates were significantly impacted by the school’s governing state, χ2 (50) = 323.89, p <.001, with Hawaii (73rd percentile) and District of Columbia (67th percentile) schools exhibiting the highest mean ranks and North Dakota (30th percentile) and Wyoming (32nd percentile) schools exhibiting the lowest.
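Similarly, a Kruskal-Wallis H test of error density across groups can be run as in the minimal sketch below; the density values are invented for illustration and are not drawn from the study.

```python
from scipy.stats import kruskal  # third-party: pip install scipy

# Hypothetical error density rates (proportions) for three urbanity groups.
city     = [0.052, 0.041, 0.000, 0.088, 0.037, 0.063]
suburban = [0.031, 0.024, 0.045, 0.012, 0.000, 0.029]
rural    = [0.018, 0.040, 0.022, 0.000, 0.035, 0.027]

h_stat, p_value = kruskal(city, suburban, rural)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```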

 

 
Table 3: Kruskal-Wallis H test results of demographic factor effects on error density rates.

 

Error counts

Table 4 provides results on the most common errors that were present on the K–12 homepages in terms of their percent likelihood of being present and their average frequency of appearance. Contrast errors were the most prevalent both in terms of likelihood (89.3 percent of all pages) and frequency (19.8 errors on each page). Notably, contrast errors also typically represent WCAG 2.0 failures, though these were not included in the WCAG failure rates above because they are not as reliably aligned to WCAG failures. This does, however, suggest that the actual WCAG failure rate is much higher than documented above. Contrast errors were followed by missing alternative text, missing label, and empty link errors, each of which was represented in up to 33.9 percent of homepages at a frequency of up to 6.2 errors per page. On average, K–12 homepages exhibited 24.3 total errors, with 95.5 percent of pages exhibiting an error of some kind. Table 4 also provides a determination of the most likely source of such errors (i.e., content vs. system), which we explain further in the discussion below.

 

 
Table 4: Site likelihood, average frequency (when present), most likely intervention level, and descriptions of most common errors.

 

Because contrast errors were the most prevalent type, we conducted further analysis to determine if this particular error was influenced by school demographic factors. Because contrast error rates exhibited highly positive skewness (σ = 9.21), we again utilized a series of Kruskal-Wallis H tests (cf., Table 5). Results indicated that demographics affected contrast errors in terms of school type, χ2 (2) = 11.99, p <.01; magnet status, χ2 (1) = 9.09, p <.01; charter status, χ2 (1) = 19.98, p <.001; urbanity, χ2 (3) = 73.48, p <.001; Title I status, χ2 (1) = 5.56, p =.02; and school size quartile, χ2 (3) = 19.58, p <.001.

 

 
Table 5: Kruskal-Wallis H test results of demographic factor effects on contrast errors.

 

Alert counts

Table 6 provides data on the most common alerts that were present on the K–12 homepages in terms of their percent likelihood of being present and their average frequency of appearance. Redundant links were the most common in terms of likelihood (82.4 percent of all pages), but redundant titles exhibited the highest frequencies on pages where they occurred (43.6 instances per page). A variety of other common alerts included suspicious links, skipped headings, justified text, and missing labels and headers, among others. On average, K–12 homepages exhibited 44.2 total alerts, with 99.7 percent of pages exhibiting an alert of some kind. Furthermore, if we couple this with the error counts reported above, we find that K–12 homepages exhibited an average total of 68.5 alerts or errors, with 99.95 percent of pages exhibiting an alert or error of some kind. While alerts do not necessarily represent an end user accessibility issue, a high alert count may be indicative of potential problems.

 

 
Table 6: Site likelihood, average frequency (when present), most likely intervention level, and descriptions of most common alerts.

 

 

++++++++++

Discussion

As we consider the implications of the four result sections provided above, a common narrative seems to emerge, which shows (1) that school Web sites generally struggled with accessibility; (2) that though some common school demographic factors exacerbated these issues (e.g., charter status, governing state, urbanity), most variance was based upon individual school circumstances; and, (3) that there are some clear areas where policy-makers and leaders can focus their attentions to help improve school Web site accessibility for all.

First, the results of this study clearly demonstrate that K–12 Web site accessibility is a nationwide problem that requires attention. Nearly two-thirds of schools failed at least one of the measurable WCAG success criteria. 89.3 percent of schools had contrast issues, which typically represent a WCAG failure. 95.5 percent of school homepages had a detectable error of some kind, with the average site having over 24 errors. The WAVE-identifiable errors that were most prevalent — poor contrast, images missing alternative text, unlabeled form controls — also have high levels of impact for users with disabilities trying to consume Web site content. Though error density rates seemed promising in some regards (M = 3.7 percent), our optimism should be curbed by the simple fact that a single, severe accessibility error can make a Web page wholly or partially inaccessible to entire groups of people. Thus, it seems that the only acceptable error density goal should be zero and that any WCAG guideline failure or detectable error might constitute a possibly serious accessibility concern for students, parents, and community members.

Second, though these problems were at least somewhat mediated by school- and state-level factors, it seems that the primary determinant of Web site accessibility is the individual school itself. Thus, this is not so much an issue of poor vs. rich or small vs. large schools as it is a problem that each individual school must account for on its own through awareness and remedy. In other words, inaccessibility is both rampant and universal, and though some states might fare better than others in head-to-head comparisons, every state has room for improvement and so does almost every school.

And third, there are clear areas where we can focus our attentions to address these problems. As documented in the tables above, most of the errors and alerts were identified as being entirely or partially at the “system” intervention level. This means that most can be addressed at the content management system (CMS) or page template level and constitute minor code changes that would alleviate numerous accessibility errors. Site-wide changes in colors, for example, might only require changing one line of CSS code but might address contrast issues across hundreds of pages on a site. Similarly, implementation of standards-compliant form templates would allow highly accessible forms in all instances. Many K–12 CMS providers have made recent efforts to provide more accessible template and content management options, though K–12 schools often are not yet up-to-date with these accessible options or may be using an older CMS. While many critical issues can be addressed at the template level, content authors must also be informed of basic accessibility techniques (e.g., providing alternative text on images) in order to ensure accessibility of the content they provide.
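As one concrete example of a system-level check, the sketch below computes the WCAG 2.0 contrast ratio between a text color and its background, the measure behind the contrast errors that dominated our results; the specific colors are arbitrary examples and the function names are ours.

```python
def relative_luminance(hex_color: str) -> float:
    """Relative luminance of an sRGB color, per the WCAG 2.0 definition."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def linearize(c: float) -> float:
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = linearize(r), linearize(g), linearize(b)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground: str, background: str) -> float:
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Light gray text on white fails the 4.5:1 Level AA threshold for normal text;
# a darker gray passes.
print(round(contrast_ratio("#999999", "#ffffff"), 2))  # about 2.85 -> fails AA
print(round(contrast_ratio("#595959", "#ffffff"), 2))  # about 7.0  -> passes AA
```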

To best support Web accessibility, districts and schools should adopt policies and implementation plans that address accessibility in procurement (particularly of electronic systems), content and document authoring, and personnel training, and each school should test its Web content on an ongoing basis to ensure it remains highly accessible. Furthermore, while automated testing can be informative, manual testing is also necessary to ensure an optimally accessible end user experience on factors that may be difficult for an automated system to detect.

Though this study has provided a first-of-its-kind baseline of U.S. school Web site accessibility at a general level, future studies could support more prescriptive guidance for site owners and specific CMS providers on patterns of automatically-detectable accessibility issues. Thus, though this study helps to establish an initial awareness of problems as they currently stand, future work needs to focus on addressing these problems at scale and determining changes over time (e.g., by tracking error density rates across multiple years to determine if schools are generally improving).

 

++++++++++

Limitations

The two major limitations of this study were that (1) we limited our analysis to homepage content (i.e., not secondary pages); and, (2) we utilized automated processes to detect errors and alerts. Each of these limitations represents a conservative approach to error detection in which Type II errors may arise (i.e., missing errors that actually exist) but Type I errors should be minimized (i.e., the errors we identified were accurate). This means that our results should be interpreted conservatively to mean that accessibility issues are at least as common as we have found but may in fact be much more common (e.g., as typified by alert counts).

 

++++++++++

Conclusion

This study has provided the first large-scale, landscape view of K–12 Web site accessibility in the U.S. Overall, we found that school Web site homepages struggled with various accessibility considerations, and we highlighted a number of areas of emphasis that should guide school administrators and policy-makers in improvement. Key areas that most schools should address include improving poor contrast between text and backgrounds, providing alternative text to images and other visual elements, and labeling form controls. Though not an exhaustive list, these results and recommendations give us a baseline for moving forward that can guide large-scale interventions in this area and incremental improvements over time. Though these Web sites may not currently be universally accessible to all U.S. students, parents, and communities, awareness, ongoing work, and further development of automated testing approaches seem to provide a promising future that hopefully includes more open and accessible Internet resources for all.

 

About the authors

Dr. Royce Kimmons is an Assistant Professor of Instructional Psychology and Technology at Brigham Young University where he studies digital participation divides specifically in the realms of social media, open education, and classroom technology use.
Web: http://roycekimmons.com
Twitter: @roycekimmons
E-mail: roycekimmons [at] gmail [dot] com

Jared Smith is the Associate Director of WebAIM, a Web accessibility consultancy based at the Center for Persons with Disabilities at Utah State University. He has almost 20 years of experience in the Web design, development, and accessibility field helping others create and maintain highly accessible Web content. Much of his written work is featured on the WebAIM.org site.
Twitter: @jared_w_smith
E-mail: j [dot] smith [at] usu [dot] edu

 

References

K. Bialik, 2017. “7 facts about Americans with disabilities,” Pew Research Center (27 July), at http://www.pewresearch.org/fact-tank/2017/07/27/7-facts-about-americans-with-disabilities/, accessed 10 January 2019.

J. Bolkan, 2018. “Large district websites have ‘extensive accessibility issues’,” THE Journal (23 February), at https://thejournal.com/articles/2018/02/23/large-district-websites-have-extensive-accessibility-issues.aspx, accessed 10 January 2019.

Bureau of Internet Accessibility, 2018. “WCAG 2.0 A/AA automated site review,” at https://www.boia.org/a11yreports/, accessed April 2018.

C. Calero Muñoz, M. Ángeles Moraga, and M. Piattini (editors), 2008. Handbook of research on Web information systems quality. Hershey, Pa.: Information Science Reference.

S. Desai and A. Srivastava, 2012. Software testing: A practical approach. Delhi: PHI Learning Private Ltd.

P.T. Jaeger, 2006. “Assessing Section 508 compliance on federal e-government Web sites: A multi-method, user-centered evaluation of accessibility for persons with disabilities,” Government Information Quarterly, volume 23, number 2, pp. 169–190.
doi: https://doi.org/10.1016/j.giq.2006.03.002, accessed 10 January 2019.

Y. Javed and M. Alenezi, 2016. “Defectiveness evolution in open source software systems,” Procedia Computer Science, volume 82, pp. 107–114.
doi: https://doi.org/10.1016/j.procs.2016.04.015, accessed 10 January 2019.

R. Kimmons, E. Hunsaker, J. E. Jones, and M. Stauffer, under review. “The nationwide landscape of K-12 school websites in the United States: Systems, services, intended audiences, and adoption patterns.”

D. Klein, W. Myhill, L. Hansen, G. Asby, S. Michaelson, and P. Blanck, 2003. “Electronic doors to education: Study of high school website accessibility in Iowa,” Behavioral Sciences & the Law, volume 21, number 1, pp. 27–49.
doi: https://doi.org/10.1002/bsl.521, accessed 10 January 2019.

S.K. Krach and J. Milan, 2009. “The other technological divide: K–12 Web accessibility,” Journal of Special Education Technology, volume 24, number 2, pp. 31–37.
doi: https://doi.org/10.1177/016264340902400203, accessed 10 January 2019.

R. Lopes, D. Gomes, and L. Carriço, 2010. “Web not for all: A large scale study of Web accessibility,” W4A ’10: Proceedings of the 2010 International Cross Disciplinary Conference on Web Accessibility (W4A), article number 10.
doi: https://doi.org/10.1145/1805986.1806001, accessed 10 January 2019.

A. Mitterer, 2008. “Development and implementation of an automated test environment for a data processing system,” Master’s thesis, Institute for Software Technology, Graz University of Technology, at https://diglib.tugraz.at, accessed 10 January 2019.

D.H. Rose and A. Meyer, 2002. Teaching every student in the digital age: Universal design for learning. Alexandria, Va.: Association for Supervision and Curriculum Development.

S.M.A. Shah, M. Morisio, and M. Torchiano, 2013. “Software defect density variants: A proposal,” 2013 Fourth International Workshop on Emerging Trends in Software Metrics (WETSoM), pp. 56–61.
doi: https://doi.org/10.1109/WETSoM.2013.6619337, accessed 10 January 2019.

T. Thompson, D. Comden, S. Ferguson, S. Burgstahler, and E.J. Moore, 2013. “Seeking predictors of Web accessibility in U.S. higher education institutions,” Information Technology and Disabilities Journal, volume 13, number 1, at http://itd.athenpro.org/volume13/number1/thompson.html, accessed 10 January 2019.

U.S. Census Bureau, 2011. “Overview of race and Hispanic origin: 2010,” 2010 Census Briefs, at https://www.census.gov/prod/cen2010/briefs/c2010br-02.pdf, accessed 10 January 2019.

U.S. Department of Education, 2018. “OCR search resolution letters and agreements,” at https://www.ed.gov/ocr-search-resolutions-letters-and-agreements, accessed April 2018.

U.S. Department of Education, 2017, “Office for Civil Rights,” at https://www2.ed.gov/about/offices/list/ocr/index.html, accessed April 2018.

U.S. Department of Justice, 2015. “Ensuring access to jobs for people with disabilities” (12 May), at https://www.justice.gov/archives/opa/blog/ensuring-access-jobs-people-disabilities, accessed 10 January 2019.

U.S. Department of Justice, Civil Rights Division, 2003. “Accessibility of state and local government Web sites to people with disabilities,” at https://www.ada.gov/websites2.htm, accessed 10 January 2019.

World Wide Web Consortium (W3C), 2008. “How to meet WCAG 2.0 (quick reference),” at https://www.w3.org/WAI/WCAG20/quickref, accessed April 2018.

 


Editorial history

Received 25 May 2018; accepted 4 January 2019.


Creative Commons License
This paper is licensed under a Creative Commons Attribution 4.0 International License.

Accessibility in mind? A nationwide study of K-12 Web sites in the United States
by Royce Kimmons and Jared Smith.
First Monday, Volume 24, Number 2 - 4 February 2019
https://firstmonday.org/ojs/index.php/fm/article/view/9183/7722
doi: http://dx.doi.org/10.5210/fm.v24i2.9183






© First Monday, 1995-2019. ISSN 1396-0466.