# accessibility-rs

The Rust web accessibility engine.

## Usage
```toml
[dependencies]
accessibility-rs = "^0.1"
```

```rust
use accessibility_rs::{audit, AuditConfig};

fn main() {
    let html = r###"<html lang="en">
    <body>
        <a href="routes.html">
            <img src="topo.gif">
            Golf
        </a>
    </body>
</html>"###;
    let css = "";
    // Pass in raw HTML, optional CSS, bounding-box clips, and a locale for the audit.
    let audit = accessibility_rs::audit(&AuditConfig::new(&html, &css, false, "en"));
    println!("{:?}", audit);
}
```
### With the Tokio runtime

```toml
[dependencies]
accessibility-rs = { version = "^0.1", features = ["tokio"] }
# `#[tokio::main]` in the example below also needs tokio itself with the macros feature.
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```

```rust
use accessibility_rs::{audit, AuditConfig};

#[tokio::main]
async fn main() {
    let html = r###"<html lang="en">
    <body>
        <a href="routes.html">
            <img src="topo.gif">
            Golf
        </a>
    </body>
</html>"###;
    let css = "";
    // Pass in raw HTML, optional CSS, bounding-box clips, and a locale for the audit.
    let audit = accessibility_rs::audit(&AuditConfig::new(&html, &css, false, "en")).await;
    println!("{:?}", audit);
}
```
### With Spider full-website crawling

```toml
[dependencies]
accessibility-rs = { version = "^0.1", features = ["spider"] }
# `#[tokio::main]` in the example below also needs tokio itself with the macros feature.
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```

```rust
use accessibility_rs::{audit, AuditConfig};

#[tokio::main]
async fn main() {
    let audit =
        accessibility_rs::audit(&AuditConfig::new_website(&"https://choosealicense.com".into()))
            .await;
    println!("{:?}", audit);
}
```
### Concurrency

If you need concurrency, parse the document yourself with `TendrilSink` and hand it to the `Auditor`:

```rust
use accessibility_rs::{fast_html5ever, Auditor, Html};
use fast_html5ever::driver::{self, ParseOpts};
use tendril::TendrilSink;

// Inside an async context:
let parser = driver::parse_document(Html::new_document(), ParseOpts::default());
let document = parser.one("<html>MY html code </html>");
let auditor = Auditor::new(&document, &"", false, &"en");
let issues = accessibility_rs::engine::audit::wcag::WCAGAAA::audit(auditor).await;
```
## Documentation

[Module documentation with examples](https://docs.rs/accessibility-rs).

## Features
- Accurate WCAG web accessibility audits.
- Incredibly fast audits (microseconds for small documents).
- Data shapes designed for audits that scale.
- Shortest-path CSS selectors for elements.
- i18n support for multiple languages.
- Re-creation of the layout tree to get element position coordinates.
- Lightning-fast full-website crawling using spider.
- Low-level design, built to be used as an engine in browsers.
## Benchmarks

```text
audit-speed/core/audit: small html (4k iterations)
time: [55.689 µs 56.246 µs 57.110 µs]
audit-speed/core/audit: medium html (4k iterations)
time: [824.07 µs 830.30 µs 839.37 µs]
audit-speed/core/audit: large html (4k iterations)
time: [1.1206 ms 1.1260 ms 1.1321 ms]
audit-speed/core/audit: spider audit html (4k iterations)
time: [263.33 ms 266.55 ms 269.93 ms]
```
## Examples

- Wasm example: see kayle_innate.
- Example integrating with a headless browser.
## Crate features

- `tokio`: enable Tokio async runtime handling. Recommended for high-frequency server usage.
- `rayon`: parallelism with rayon (expensive test future handling).
- `rayon_wasm`: enable the wasm runtime for rayon.
- `spider`: crawl entire websites using spider. Full website audits of 100-1,000 pages within milliseconds.
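The flags above can be combined in one manifest; for example, a sketch of a server setup that crawls whole sites on the Tokio runtime (assuming the `tokio` and `spider` features compose, which the list above does not rule out):

```toml
[dependencies]
# Hypothetical combination of the crate features documented above;
# "tokio" enables the async runtime, "spider" enables full-site crawling.
accessibility-rs = { version = "^0.1", features = ["tokio", "spider"] }
```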
## Contributing

To help improve the rules, the following needs to be done:

- Add the rule to the tracking list; you can use the standards list and mappings for help.
- Add the logic for handling the rule to `wcag_rule_map` and the techniques.
- Add unit tests.
## License

This project is licensed under either of:

- Apache License, Version 2.0 (LICENSE_APACHE or https://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE_MIT or https://opensource.org/licenses/MIT)