CleanLinks started out as a WebExtension rewrite of @DiegoCR’s original XUL/XPCOM CleanLinks extension, and is now a fork of it.
CleanLinks protects your privacy by automatically detecting and skipping redirect pages that track you on your way to the link you really wanted. Tracking parameters (e.g. utm_* or fbclid) are also removed.
For example:
- http://www.foobar.com/track=ftp://gnu.org ➠ ftp://gnu.org/
- http://example.com/aHR0cDovL3d3dy5nb29nbGUuY29t ➠ http://www.google.com
- javascript:window.open('http://somesite.com') ➠ http://somesite.com/
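To make the idea concrete, here is a minimal JavaScript sketch of this kind of extraction. It is only an illustration, not CleanLinks’ actual code: the function name and the handful of patterns it checks are assumptions made for this example.

```js
// Minimal illustration (not CleanLinks' real implementation): look for a URL
// embedded inside another link, in clear text or base64-encoded form.
function extractEmbeddedUrl(link) {
    // javascript: links that open a page, e.g. javascript:window.open('http://somesite.com')
    const jsMatch = link.match(/^javascript:.*open\(['"]((?:https?|ftp):\/\/[^'"]+)['"]/);
    if (jsMatch)
        return jsMatch[1];

    // URL written in clear text in the path or query string
    const plainMatch = link.match(/[=\/]((?:https?|ftp):\/\/.+)$/);
    if (plainMatch)
        return decodeURIComponent(plainMatch[1]);

    // Base64-encoded URL in a path segment, e.g. aHR0cDovL3d3dy5nb29nbGUuY29t
    for (const segment of new URL(link).pathname.split('/')) {
        try {
            const decoded = atob(segment);
            if (/^(https?|ftp):\/\//.test(decoded))
                return decoded;
        } catch (err) {
            // not valid base64, keep looking
        }
    }

    return null;  // nothing embedded, leave the link untouched
}

// With the examples above:
// extractEmbeddedUrl('http://www.foobar.com/track=ftp://gnu.org')        -> 'ftp://gnu.org'
// extractEmbeddedUrl('http://example.com/aHR0cDovL3d3dy5nb29nbGUuY29t')  -> 'http://www.google.com'
// extractEmbeddedUrl("javascript:window.open('http://somesite.com')")    -> 'http://somesite.com'
```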
For maximum privacy, the rules are maintained and editable locally (with sensible defaults shipped with the add-on). CleanLinks will break some websites, and you will need to whitelist the affected URLs manually for them to work. This can be done easily via the menu behind the CleanLinks icon.
You can test the current (master) link cleaning code online by pasting a link in the text area and clicking the "Clean it!" button.
This add-on protects your privacy by skipping intermediate pages that track you while redirecting you to your destination (1), or that divulge your current page while making requests (2). Any request that contains another URL is considered suspicious, and is skipped in favor of the target page (for foreground requests) or dropped (for background requests). A whitelist manages the pages and websites that have legitimate uses (3) for such redirects.
Some illustrative examples are:
- Facebook tracks all outgoing links by first sending you to the page
  https://l.facebook.com/l.php?u=<the actual link here>
  which then redirects you to the actual URL.
- Analytics services report the page you are on; for example Google Analytics uses
  https://www.google-analytics.com/collect?dl=<your current page>&<more info on what you do>
- Logging in through OpenID requires passing the original URL so you can return to that page once the login is performed, e.g.
  https://github.com/login/oauth/authorize?redirect_uri=<URL to the previous page>&<more parameters for the request>
All these embedded links are detected automatically. Links of type 1 should be redirected to their destination, those of type 2 (identified by the fact that they are not “top-level” requests) should be dropped, and those of type 3 allowed through a whitelist.
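As an illustration of that decision, here is a hedged JavaScript sketch. The names used (handleRequest, whitelist.matches) are assumptions for this example, not the add-on’s real API; it reuses the extractEmbeddedUrl sketch from above.

```js
// Illustrative sketch only: the whitelist object and its matches() method are
// assumptions for this example, not CleanLinks' actual interfaces.
function handleRequest(request, whitelist) {
    const embedded = extractEmbeddedUrl(request.url);  // see the earlier sketch
    if (embedded === null)
        return { action: 'allow' };                    // no embedded URL, nothing to do

    if (whitelist.matches(request.url))
        return { action: 'allow' };                    // type 3: legitimate use, e.g. OAuth logins

    if (request.type === 'main_frame')                 // top-level request
        return { action: 'redirect', to: embedded };   // type 1: go straight to the destination

    return { action: 'drop' };                         // type 2: background request that leaks your current page
}
```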
The whitelist is populated with a few sensible defaults, and must be maintained manually by each user as they encounter pages that break. Quick access to the most recently cleaned requests is available by clicking the add-on icon. In this popup, all recently cleaned links for the tab appear, and each can be added to the whitelist permanently (“Whitelist Embedded URL” button) or allowed through just once (“Open Un-cleaned Link” button).
Other tracking data can be added to URLs to follow your behaviour online. These can be, for example, the fbclid= or utm_campaign= query parameters, or /ref= in the path of Amazon URLs.
This data cannot be detected automatically, so CleanLinks has a set of rules (the same set that maintains the embedded-URL whitelist) specifying which data is used for tracking and should be removed from URLs.
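As a rough illustration of what removing such parameters amounts to, here is a minimal sketch with a hard-coded parameter list; the configurable rules CleanLinks actually uses are more general.

```js
// Minimal sketch: strip a fixed list of known tracking parameters with the
// standard URL API. CleanLinks' real rules are configurable and also handle
// path components such as Amazon's /ref=.
const TRACKING_PARAMS = ['fbclid', 'utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content'];

function stripTrackingParams(link) {
    const url = new URL(link);
    for (const param of TRACKING_PARAMS)
        url.searchParams.delete(param);
    return url.href;
}

// stripTrackingParams('https://example.com/page?id=42&utm_campaign=spring&fbclid=abc')
// -> 'https://example.com/page?id=42'
```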
Be part of the open-source community that helps each other browse more safely and more privately!
Being part of a community means being respectful of everyone and keeping this environment friendly, welcoming, and harassment-free. Abusive behaviour will not be tolerated and can be reported by email at me@cimba.li; wrongdoers may be permanently banned from interacting with CleanLinks.
Any reports are welcome, including suggestions to improve and maintain the default rules that CleanLinks uses.
Maintaining even a small add-on like CleanLinks is in fact very time-consuming, so every little bit helps!
You can improve translations or add a new language on CleanLinks’ POEditor page; the strings will be imported directly into the add-on at the next release.
This is the current status of translations:
The permissions are listed in the manifest file and described in the API documentation. Here is a breakdown of why we need each of the requested permissions:
Permission | Shown (on addons.mozilla.org) as | Needed for |
---|---|---|
clipboardWrite | Input data to the clipboard | Copying cleaned links from the context menu |
contextMenus | Not shown | Copying cleaned links from the context menu |
alarms | Not shown | Automatically saving options |
webRequest | Access your data for all websites | Clean links while they are accessed |
webRequestBlocking | Access your data for all websites | Clean links while they are accessed |
<all_urls> | Access your data for all websites | Clean javascript links, highlight cleaned links |
storage | Not shown | Store rules and preferences |
https://publicsuffix.org/list/public_suffix_list.dat | Not shown | Identifying public suffixes (e.g. .co.uk) |
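For reference, this is roughly how those permissions appear in the manifest file (an abbreviated excerpt for illustration, not the complete manifest.json):

```json
{
	"permissions": [
		"clipboardWrite",
		"contextMenus",
		"alarms",
		"webRequest",
		"webRequestBlocking",
		"storage",
		"<all_urls>",
		"https://publicsuffix.org/list/public_suffix_list.dat"
	]
}
```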
Apart from the AMO page https://addons.mozilla.org/addon/clean-links-webext/, you can also get the add-on straight from this repo. This is useful if you want to help with testing, for example.
- Either get web-ext, and run `web-ext run` in the `addon/` source code directory.
- Alternatively, temporarily load the add-on from `about:debugging#addons`, by ticking "Enable add-on debugging", clicking "Load Temporary Add-on", and selecting the `manifest.json` file from the source code directory.
- Finally, you can build the add-on using `yarn bundle` or `web-ext -s ./addon -a ./dist build` in this repository’s top-level directory, and install the add-on from the file that was generated.