For this demo we're writing an extension that adds a context menu over images. When the menu item is clicked, the image will be sent to the background, manipulated, converted to a data URI, and returned to the page, where it will be swapped in as the original image's src attribute, hopefully resulting in an immediate change of appearance.
It's starting to get a little complicated in here, so let's do some refactoring.
First, we'll move the function that asks our background process to create context menus from the business logic (which may not run on all pages) to our content file, which does. Why do this? Further on down the line we may decide to show different context menus (or none at all) depending on what page we're on. Our content script looks like this now:
// send a message to the background requesting our business logic
chrome.runtime.sendMessage({
  cmd: "runLogic"
});
// send a message to the background requesting that a context menu be attached
chrome.runtime.sendMessage({
  cmd: "addMenu"
});
Our business logic finally has something to do: update an image when the background sends a replacement.
// listen for messages from the background
chrome.runtime.onMessage.addListener(message => {
  // should we render our converted image?
  if (message.oldImageSrc) {
    // walk an array of all images on the page
    Array.from(document.getElementsByTagName("IMG")).forEach(img => {
      // find all images matching the source of the one we right-clicked
      if (img.src && img.src === message.oldImageSrc) {
        // swap in the updated source
        img.src = message.newImageSrc;
      }
    });
  }
});
Our background process is now listening for two different cmd values, addMenu and runLogic. Let's abstract those out of the listener into an easier-to-reference cmd object:
// a list of commands we are willing to carry out when content scripts ask
const cmd = {
  // clear, add, and listen to new context menu (no args required)
  addMenu: () => {
    // don't try to duplicate this menu item
    chrome.contextMenus.removeAll();
    // create a menu
    chrome.contextMenus.create({
      id: "BaselineV2",
      title: "V2 Baseline",
      // show the menu over images
      contexts: ["image"],
      // old-school inline interaction handler
      onclick: event => {
        // because the click came from a global UI element we need to figure out which tab it came from
        chrome.tabs.query({ active: true, currentWindow: true }, function(tabs) {
          // pass the event AND the tab ID, which should be in tabs[0]
          handleMenuClick(event, tabs[0].id);
        });
      }
    });
  },
  runLogic: (request, sender) => {
    chrome.tabs.executeScript(sender.tab.id, {
      file: "js/logic.js"
    });
  }
};
Here's our onMessage listener, which sends along both the request and sender params, useful when we need to do something on a specific tab.
// listen for messages
chrome.runtime.onMessage.addListener((request, sender, sendResponse) => {
  if (request.cmd) {
    // can we carry out this command?
    if (typeof cmd[request.cmd] === "function") {
      cmd[request.cmd](request, sender);
    }
  }
});
When someone clicks our menu, we'll see if we can find the image it's over, do stuff with the image, and send it back:
// what to do when someone clicks our menu item
const handleMenuClick = (event, tabId) => {
  getImageData({ oldImageSrc: event.srcUrl }).then(result => {
    // our updated image has rendered
    // throw it back to the content script, which will render it in place of the original
    chrome.tabs.sendMessage(tabId, {
      // find this image
      oldImageSrc: result.oldImageSrc,
      // re-render it with this as its src
      newImageSrc: result.newImageSrc
    });
  });
};
Here's getImageData, which will work unchanged on a Web page as long as the image it's trying to load shares the same domain as the page:
// convert an image URL into a data:URI
const getImageData = request => {
  // being able to wrap all of this in a single promise is super handy
  return new Promise(resolve => {
    // make a new image
    let img = new Image();
    // once the image has loaded, do all this
    img.onload = () => {
      // make a canvas
      let canvas = document.createElement("CANVAS");
      // set canvas dimensions to the image's real dimensions
      canvas.height = img.naturalHeight;
      canvas.width = img.naturalWidth;
      // run the image through a transformation function
      deRezz(img, canvas);
      // all done? resolve our promise
      resolve({
        status: "OK",
        oldImageSrc: request.oldImageSrc,
        newImageSrc: canvas.toDataURL("image/png")
      });
    };
    // this should never happen, since the image has already rendered in the browser
    img.onerror = function() {
      resolve({ status: "error", url: request.oldImageSrc });
    };
    // setup is complete; source the image to load
    img.src = request.oldImageSrc;
  });
};
Finally here's the bit that transforms the image data and sends it back. Feel free to substitute the awesome generative art routine of your choice!
// transform an image into very large pixels
const deRezz = (img, canvas) => {
  // how big will our squares be?
  const chunkSize = 20;
  // a canvas has several possible contexts; this is the one that will let us draw an image
  let context = canvas.getContext("2d");
  // disable image smoothing, which can give us different results in different browsers
  context.imageSmoothingEnabled = false;
  // draw the image full-size, to exactly fill the canvas
  context.drawImage(img, 0, 0, canvas.width, canvas.height);
  // convert what's on the canvas to an image data object
  let imageData = context.getImageData(0, 0, canvas.width, canvas.height).data;
  // for our demo: downsample into big blocks of color
  let row, col, pixel;
  for (row = 0; row < canvas.height; row = row + chunkSize) {
    for (col = 0; col < canvas.width; col = col + chunkSize) {
      pixel = (row * canvas.width + col) * 4;
      context.fillStyle =
        "#" +
        ("00" + imageData[pixel].toString(16)).slice(-2) +
        ("00" + imageData[pixel + 1].toString(16)).slice(-2) +
        ("00" + imageData[pixel + 2].toString(16)).slice(-2);
      context.fillRect(col, row, chunkSize, chunkSize);
    }
  }
};
Right, so ... this approach does not work for Manifest V3. Here are the main problems:
- Because it's a service worker, background.js has no document, so we can't build a canvas tag or load an img for manipulation via new Image() or document.createElement.
- Instead of DOM manipulation we will need to use an OffscreenCanvas and load the image via the fetch API, since where there's no document there's also no XMLHttpRequest (see the sketch just after this list).
- Sadly, the fetch API seems to be fully subject to cross-domain restrictions, so it won't load most of the images I've tried.
- There's discussion about this on the Chromium Extensions group, so we may not have our final answer.
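For reference, here's a minimal sketch of what that pure-service-worker approach would look like. The function name getImageDataMV3 is hypothetical, and the fetch call is exactly the step that cross-domain rules block in practice:
// hedged sketch of converting an image URL to a data:URI inside a service worker
// (assumes the image URL is reachable cross-origin, which is where this fails)
const getImageDataMV3 = async oldImageSrc => {
  // fetch the image bytes; this is the step that usually gets blocked
  const response = await fetch(oldImageSrc);
  const blob = await response.blob();
  // decode the bytes without a DOM
  const bitmap = await createImageBitmap(blob);
  // OffscreenCanvas stands in for document.createElement("CANVAS")
  const canvas = new OffscreenCanvas(bitmap.width, bitmap.height);
  canvas.getContext("2d").drawImage(bitmap, 0, 0);
  // no toDataURL() here; convert to a Blob, then base64-encode it ourselves
  const pngBlob = await canvas.convertToBlob({ type: "image/png" });
  const buffer = await pngBlob.arrayBuffer();
  let binary = "";
  new Uint8Array(buffer).forEach(b => (binary += String.fromCharCode(b)));
  return {
    oldImageSrc,
    newImageSrc: "data:image/png;base64," + btoa(binary)
  };
};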
The workaround gives us an exciting opportunity to have more fun with message passing and crack into web_accessible_resources.
As it turns out, iframes sourced from our extension have the best of all worlds: they are unstoppable by the host page and have enough privilege to load and access data from any image I've found so far.
In our manifest we've added a web_accessible_resources array with one entry:
"web_accessible_resources": ["/html/scrape.html"],
Here's our iframe overlay:
<!DOCTYPE html>
<html>
  <head>
    <title>Convert Image to Data</title>
    <meta charset="utf-8" />
  </head>
  <body>
    <script src="../js/scrape.js"></script>
  </body>
</html>
Here's our overlay behavior:
// message.to must match this or messages from the background will be ignored
const ME = "scrape";
// transform an image into very large pixels
const deRezz = (img, canvas) => {
  // how big will our squares be?
  const chunkSize = 20;
  // a canvas has several possible contexts; this is the one that will let us draw an image
  let context = canvas.getContext("2d");
  // disable image smoothing, which can give us different results in different browsers
  context.imageSmoothingEnabled = false;
  // draw the image full-size, to exactly fill the canvas
  context.drawImage(img, 0, 0, canvas.width, canvas.height);
  // convert what's on the canvas to an image data object
  let imageData = context.getImageData(0, 0, canvas.width, canvas.height).data;
  // for our demo: downsample into big blocks of color
  let row, col, pixel;
  for (row = 0; row < canvas.height; row = row + chunkSize) {
    for (col = 0; col < canvas.width; col = col + chunkSize) {
      pixel = (row * canvas.width + col) * 4;
      context.fillStyle =
        "#" +
        ("00" + imageData[pixel].toString(16)).slice(-2) +
        ("00" + imageData[pixel + 1].toString(16)).slice(-2) +
        ("00" + imageData[pixel + 2].toString(16)).slice(-2);
      context.fillRect(col, row, chunkSize, chunkSize);
    }
  }
};
// a list of commands we are willing to carry out when the background process asks
const cmd = {
  // stand by to render the image the background asks for
  render: (message, sender) => {
    // burn after reading; this function may get called multiple times if there are iframes on the page
    delete cmd.render;
    // create an image tag
    let img = new Image();
    // on load, do stuff
    img.onload = () => {
      // make a canvas
      let canvas = document.createElement("CANVAS");
      // set canvas dimensions to the image's real dimensions
      canvas.height = img.naturalHeight;
      canvas.width = img.naturalWidth;
      // run the image through a transformation function
      deRezz(img, canvas);
      // throw it back to the background process, which will bounce the new image
      // to the content script, which will render it in place of the original
      chrome.runtime.sendMessage(
        {
          cmd: "handleImageData",
          args: {
            oldImageSrc: message.args.oldImageSrc,
            newImageSrc: canvas.toDataURL("image/png")
          }
        },
        // here we make use of the automatic response, which we get for free
        response => {
          // prevent message-port-closed warning from showing in console
          let lastErr = chrome.runtime.lastError;
          // ask the background to ask the business logic to close our host overlay
          chrome.runtime.sendMessage({
            to: "logic",
            cmd: "closeOverlay",
            // args.id is the ID of the iframe we are running in right now
            args: {
              id: message.args.id
            }
          });
        }
      );
    };
    // set the image source
    img.src = message.args.oldImageSrc;
  }
};
// listen for messages from the background
chrome.runtime.onMessage.addListener((message, sender) => {
  // is the message for us?
  if (message.to && message.to === ME) {
    // do we have a valid command?
    if (message.cmd && typeof cmd[message.cmd] === "function") {
      // run it, passing the full message object
      cmd[message.cmd](message, sender);
    }
  }
});
Our business logic has a cmd object as well. It's going to handle opening and closing iframe overlays, and rendering the altered image when it arrives:
// message.to must match this or messages from the background will be ignored
const ME = "logic";
// a list of commands we are willing to carry out when the background process asks
const cmd = {
  // render the altered image
  renderAlteredImage: message => {
    // walk an array of all images on the page
    Array.from(document.getElementsByTagName("IMG")).forEach(img => {
      // find all images matching the source of the one we right-clicked
      if (img.src && img.src === message.args.oldImageSrc) {
        // swap in the updated source
        img.src = message.args.newImageSrc;
      }
    });
  },
  // open an iframe overlay
  openOverlay: message => {
    // never open overlays inside iframes
    if (window.self === window.top) {
      // build an iframe
      const overlayToOpen = document.createElement("IFRAME");
      // shall we hide the overlay?
      if (message.args.hidden) {
        // make it tiny
        overlayToOpen.height = overlayToOpen.width = "1";
        // get ready to move it outside the visible window
        overlayToOpen.style.position = "absolute";
        // move it outside the visible window
        overlayToOpen.style.left = overlayToOpen.style.top = "-100px";
      }
      // give it the ID generated by the background
      overlayToOpen.id = message.args.id;
      // when it has loaded, ask it to render
      overlayToOpen.onload = () => {
        // bounce a message through the background process
        chrome.runtime.sendMessage({
          // this is the overlay name, not the iframe element ID
          to: message.args.overlay,
          // render
          cmd: "render",
          // send argument objects
          args: message.args
        });
      };
      // get the full path, which will be chrome-extension://deadbeefdeadbeefdeadbeef/html/overlayname.html
      overlayToOpen.src = chrome.runtime.getURL(
        // don't forget to list /html/overlayname.html in manifest.web_accessible_resources
        "/html/" + message.args.overlay + ".html"
      );
      // append to the DOM so it will start loading
      document.body.appendChild(overlayToOpen);
    }
  },
  // close an iframe overlay
  closeOverlay: message => {
    // overlays should not have rendered in iframes
    if (window.self === window.top) {
      // do we know the element ID to close?
      if (message.args.id) {
        // look for it
        const overlayToClose = document.getElementById(message.args.id);
        // did we find it?
        if (overlayToClose) {
          // close it
          overlayToClose.parentNode.removeChild(overlayToClose);
        }
      }
    }
  }
};
// listen for messages from the background
chrome.runtime.onMessage.addListener(message => {
  // is the message for us?
  if (message.to && message.to === ME) {
    // do we have a valid command?
    if (message.cmd && typeof cmd[message.cmd] === "function") {
      // run it, passing the full message object
      cmd[message.cmd](message);
    }
  }
});
Our background process has gotten simpler. It passes the same messages for runLogic and addMenu as before, plus an extra task to handle image data once it's been manipulated by our overlay.
// handle scraped image data
handleImageData: (request, sender) => {
  // send a message to the same tab that made the request
  chrome.tabs.sendMessage(sender.tab.id, {
    // send only to the logic process
    to: "logic",
    // logic.cmd.renderAlteredImage
    cmd: "renderAlteredImage",
    // args to render
    args: {
      // the old image URL, which we will need to seek out
      oldImageSrc: request.args.oldImageSrc,
      // the updated data:uri our overlay built and sent back
      newImageSrc: request.args.newImageSrc
    }
  });
}
If the message isn't for the background process it will echo back out to the sending tab:
// listen for messages from content scripts
chrome.runtime.onMessage.addListener((request, sender) => {
  // do we know what tab this request came from?
  if (sender.tab.id) {
    // is there a command?
    if (request.cmd) {
      // do we need to redirect to a content script?
      if (request.to && request.to !== ME) {
        // send it
        chrome.tabs.sendMessage(sender.tab.id, {
          to: request.to,
          cmd: request.cmd,
          args: request.args
        });
      } else {
        // can we carry out this command here?
        if (typeof cmd[request.cmd] === "function") {
          cmd[request.cmd](request, sender);
        }
      }
    }
  }
});
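One more note on the Manifest V3 side: the inline onclick handler and chrome.tabs.executeScript used in the V2 baseline aren't available from a service worker. Here's a hedged sketch of the equivalents, assuming the same handleMenuClick helper, menu ID, and js/logic.js from the baseline:
// menu clicks arrive through an event rather than an inline onclick property
chrome.contextMenus.onClicked.addListener((info, tab) => {
  // menuItemId matches the id passed to chrome.contextMenus.create
  // (use whatever id the V3 menu actually registers)
  if (info.menuItemId === "BaselineV2") {
    handleMenuClick(info, tab.id);
  }
});
// chrome.tabs.executeScript is gone; chrome.scripting.executeScript replaces it
// and requires the "scripting" permission in the manifest
const runLogic = (request, sender) => {
  chrome.scripting.executeScript({
    target: { tabId: sender.tab.id },
    files: ["js/logic.js"]
  });
};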
- Install by dragging and dropping ep_006/v2/ and ep_006/v3/ into chrome://extensions.
- The v2 card should have no errors; the v3 card may show a warning about Manifest V3.
- Open a page with some images on it. I like to test on search results for "dog" on Google Image Search, because looking at pictures of dogs drops my blood pressure.
- Right-click any image on the page.
- You ought to see both the V2 Baseline and V3 Test context menu items, and they should produce the same results.
- Click and enjoy your pixelated pooch!
Hoping they sort out the unexpected behavior around the fetch API, but not holding my breath. Some bugs I've read seem to indicate that it's been around for a while.