
Snowpack – How does it work?

Snowpack is a post-install tool. It runs after npm install, and it essentially exists to convert your npm packages (in your "node_modules/" directory) into JS files that run in the browser without a bundler (written to a "web_modules/" directory).

Creating a sample app

In this tutorial we are going to create a simple demo app which makes use of a library called finance. We are going to create an app to calculate simple interest from principal, rate and time. This application has no fancy user interface; it just calls the method from the library with the given parameters and prints the result to the console.

First let's set up the application by creating the necessary files.
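A minimal index.html for such an app might look like the following sketch; only the module script tag at the bottom matters here:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Simple Interest Calculator</title>
  </head>
  <body>
    <script type="module" src="/src/app.js"></script>
  </body>
</html>
```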


This is how our index.html file will look. As you can see, there's nothing much in the page apart from the script tag at the bottom. This script tag includes a file called app.js from the src folder of the app and has its type attribute set to module. This means that app.js is an ES module that can be used directly in the page. We will see what goes inside the app.js file later.


This is how our package.json will look for the demo app. The important thing to note here is the dependency on the finance package.
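A plausible manifest for the demo app (the name and version numbers are illustrative):

```json
{
  "name": "snowpack-demo",
  "version": "1.0.0",
  "dependencies": {
    "finance": "^1.0.0"
  }
}
```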

Creating a dummy package inside node_modules

And now, for this example, we are not going to actually install any npm package. We are going to create custom packages on the fly within the node_modules folder. That's how node_modules works: at the end of the day, all the packages are just folders of files with a package manifest, in other words a package.json. Hence, in order to create a new package, all you need is two things: a package.json and the source file.

For the finance package we were talking about earlier, we are going to create the package in the same way, as shown below.

package: finance

The finance package will contain two functions: one for calculating simple interest from principal, rate and time, and another that includes the principal in the result. We will be using only the simpleInterest function for our demo; the other one just exists for the sake of it.
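A sketch of what finance/index.js could contain; the function names and the use of the math package's helpers are assumptions based on the description above:

```javascript
// finance/index.js (ESM)
import { multiply, divide } from 'math';

// Simple interest from principal, rate and time
export function simpleInterest(principal, rate, time) {
  return divide(multiply(multiply(principal, rate), time), 100);
}

// Simple interest plus the principal
export function simpleInterestWithPrincipal(principal, rate, time) {
  return principal + simpleInterest(principal, rate, time);
}
```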


The package.json for the finance module is a normal package.json manifest with one exception: we are adding the module field to point to the ESM version of this module. Since we have already written the package using ES import and export statements, the value of this field is the same as the main field, which points to index.js.
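The manifest could look like this; note the module field alongside main:

```json
{
  "name": "finance",
  "version": "1.0.0",
  "main": "index.js",
  "module": "index.js",
  "dependencies": {
    "math": "^1.0.0"
  }
}
```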

package: math

Now it's time to take a look at the math package. It's a simple library exposing primitive operations like add, multiply, divide, etc., and it follows the CommonJS module system. The reason it uses CommonJS is to demonstrate the capabilities of Snowpack in handling CommonJS modules. Snowpack can also bundle CommonJS modules, as long as they are internal dependencies of your parent packages.

math / index.js

Below are the contents of the math library.
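A plausible sketch of math/index.js using CommonJS exports (the exact set of operations is assumed from the description):

```javascript
// math/index.js (CommonJS)
function add(a, b) { return a + b; }
function subtract(a, b) { return a - b; }
function multiply(a, b) { return a * b; }
function divide(a, b) { return a / b; }

// CommonJS export, in contrast to the ESM syntax used by finance
module.exports = { add, subtract, multiply, divide };
```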

Now the dependency tree of our demo app looks like this.

Now install the dependencies using npm and then run snowpack.
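For example (assuming Snowpack is available via npx or as a dev dependency):

```shell
npm install
npx snowpack
```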

Snowpack will read the dependencies from the package.json and start bundling them. Each individual dependency is built with all of its dependent packages flattened into a single file. As you can see below, the finance and math packages are bundled into a single file called finance.js in the new web_modules directory. And this is the file which we will be consuming in our demo app.

Now, if you inspect the finance.js file in your web_modules folder, you can see the flattened output.

Now we can see how we can use the finance.js from the web_modules folder in our app.js.
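app.js might look like the following sketch; note that the import points at the bundled file inside web_modules, not at the bare package name:

```javascript
// src/app.js
import { simpleInterest } from '../web_modules/finance.js';

// principal = 1000, rate = 5%, time = 2 years
console.log(simpleInterest(1000, 5, 2));
```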

Peer Dependencies

Now, what about peer dependencies? Snowpack is well equipped for handling peer dependencies in your applications too. It will properly bundle your dependencies by putting commonly used code, such as peer dependencies, inside a common folder so that the packages which consume it can access the same code without redundancy.

The dependency tree of our app is very simple: we have only two packages, with finance depending on math. Let's introduce a new package called bmi, which will expose methods for calculating BMI (body mass index). The bmi package also depends on the math package for its calculations. Hence our math package now becomes a peer dependency shared by finance and bmi.

We are going to follow the same steps for creating the bmi package just as we did for finance.
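A sketch of bmi/index.js; the function name and formula are assumptions for illustration:

```javascript
// bmi/index.js (ESM)
import { multiply, divide } from 'math';

// BMI = weight (kg) / height (m) squared
export function bmi(weight, height) {
  return divide(weight, multiply(height, height));
}
```

Its package.json would mirror the finance manifest, with main and module pointing to index.js and a dependency on math.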


Now add the new package to the dependencies list for the demo app in package.json.

The dependency tree of our demo will now look like this:

Now install the dependencies using npm and then run snowpack.

You can optionally add "snowpack" as a "prepare" script to your package.json, and npm/yarn will automatically run it after every new dependency install. This is recommended so that new dependencies are automatically included in your web_modules/ directory immediately.
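For instance, in the demo app's package.json:

```json
{
  "scripts": {
    "prepare": "snowpack"
  }
}
```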

After installing and running Snowpack, the web_modules directory will contain three bundled JavaScript files: one for the bmi package, one for the finance package, and, inside a new common directory, a file named index-093dfa0c.js containing the code used by both packages, which is actually the math package code.

If you inspect the contents of the files in the web_modules folder, you can see for yourself that Snowpack changed both the bmi and finance packages to import from the bundled common math package.

This is how the bundled bmi package will look like now.

And this is how the bundled finance package will look like.

And if you are curious about what goes inside the common index file: as I mentioned previously, it just contains the code of the math package.

Now we can import the bmi package into our demo application from the web_modules folder like below:
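For example (the exported name is illustrative):

```javascript
import { bmi } from '../web_modules/bmi.js';

console.log(bmi(75, 1.8));
```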

Production Builds

Snowpack is not only optimized for the development environment but also for production builds. You can create compressed and minified versions of your dependencies for use in production environments when deploying with Snowpack. It also generates source maps when you are bundling for production. All you need to do is pass the --optimize flag while running Snowpack.
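For example:

```shell
npx snowpack --optimize
```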

Tree Shaking

Snowpack helps you remove unused code from your dependencies (when "Automatic Mode" is enabled via the --include flag). In order for tree shaking to work properly, we need ESM-compatible versions of all our packages. Since our math package is based on the CommonJS module system, we need a separate ESM version of it, like below.

It is actually quite easy: all you have to do is convert each method exported from the math package to use the export syntax instead of module.exports.
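The ESM version might look like this (the file name is an assumption):

```javascript
// math/index.esm.js (ESM version of the same operations)
export function add(a, b) { return a + b; }
export function subtract(a, b) { return a - b; }
export function multiply(a, b) { return a * b; }
export function divide(a, b) { return a / b; }
```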

You also need to make a small change to the package.json of the math package, exposing the ESM version using the module field.

Now run Snowpack again, passing the --include flag with the app.js file.
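For example (the entry file path is illustrative):

```shell
npx snowpack --include "src/app.js"
```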

Your math package will now be properly tree-shaken: the unused subtract method will be removed from the bundle, since it is not used by any of the dependencies.

That's all for this tutorial. I hope the above examples gave you a better idea of how Snowpack bundles the dependencies of your applications. Please let me know about any issues or feedback on the article in the comments.

Cover Image by Chris Biron on Unsplash


Everything you need to know about the native lazy load in Chrome


The lazyload feature allows developers to selectively control the lazyload attribute on <iframe> and <img> using the Feature-Policy header or the <iframe> "allow" attribute. This provides more control over loading delay for embedded content and images on a per-origin basis. Web developers can use this policy to force loading to be delayed or non-delayed for an entire website and all or parts of any embedded content. This can give a significant boost in page load times.

LazyLoad is a Chrome optimization that defers loading below-the-fold images and certain third-party iframes on the page until the user scrolls near them, which reduces data usage, speeds up page loads, and reduces memory use. For frames, third-party iframes that are meant to be shown to the user are deferred, leaving alone frames that are used for analytics or communication according to heuristics. For images, LazyLoad inserts placeholders with appropriate dimensions, which it gets by issuing a range request for the first few bytes of the image. When the user scrolls, deferred frames and images that will be visible soon start loading.

The resources Chrome consumes (e.g. users’ time, data, or phone memory) should be proportional to the user value it provides. For page loads, this means that Chrome shouldn’t waste any of these resources loading content the user will never see. For instance, pages often require scrolling to get to the bottom, and users don’t always scroll that far.

For foreground page loads, content that is not visible because it is below the fold should commence loading just in time to complete when scrolled to it.

The specific goal of this project is to greatly reduce the number of image and third-party iframe resources necessary to load a site by using visibility predictions to trigger their loading.

Key metrics of interest are network data, load latency performance and memory savings.

With the lazyload attribute, developers could prioritize the loading of different inline frames and images on a web page. This, however, could become a cumbersome process that does not scale well for larger web sites, especially given that applying the attribute is origin-agnostic. The lazyload policy aims to resolve this issue by changing a browser's decision on enforcing the lazyload attribute for a browsing context and its nested contexts.

Proposed Solution

A new policy-controlled feature for lazy loading will alter lazyload behavior for a browsing context and its nested contexts. The feature will potentially modify the behavior of the user agent towards the lazyload attribute value for nested resources. Essentially, when the feature is disabled for an origin, no resource inside the origin can escape lazy loading by setting lazyload="off". Specifically, if for a resource the lazyload attribute is set to:

  • on: Browser should load the resource lazily.
  • off: Browser ignores the attribute value and assumes auto.
  • auto: There is no change in browser behavior.

This feature could be enforced either in the HTTP header or by using the allow attribute of an inline frame.

Using the feature

This feature can be introduced with the HTTP headers. For instance,
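a Feature-Policy header along the following lines (the second origin is illustrative)

```
Feature-Policy: lazyload 'self' https://example.com
```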

would not allow synchronous loading for any <iframe> or <img> (that is not yet in the viewport) from origins other than those allowed by the policy, such as 'self'.

Similarly, the feature could be set through the allow attribute of an inline frame:
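For example (the src is illustrative):

```html
<iframe src="https://example.com/widget.html" allow="lazyload 'none'"></iframe>
```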

which disregards lazyload='off' for all origins, including the <iframe>'s origin itself.


LazyFrames

The LazyFrames mechanism defers certain iframes from being loaded until the user scrolls near them. LazyFrames attempts to target third-party iframes that are intended to be shown to the user (e.g. banner ads). First-party iframes aren't targeted because these frames share a JavaScript context with the embedding page. LazyFrames also avoids deferring iframes that are likely to be used for communication (e.g. for social media widgets) or analytics to avoid breaking their functionality, according to heuristics (e.g. tiny iframe dimensions, "display:none", etc.).

Which iframes should be deferred?

An iframe will be deferred if it satisfies all of the following:

  • It’s a third-party iframe (i.e. a different origin than the embedding page),
  • Larger than 4×4 in dimensions,
  • Not marked as “display:none” nor “visibility:hidden”,
  • Not positioned off-screen using negative x or y coordinates

LazyFrames defers frames by waiting to call FrameLoader::Load() until an installed IntersectionObserver fires when the user scrolls near the frame.


LazyImages

The LazyImages mechanism defers loading of images until the user scrolls near them. To preserve the layout and avoid reflow jank when loading images in, LazyImages inserts appropriately sized rectangular placeholders where the images will be, using the image placeholder mechanism that was created for the LoFi feature.

To achieve this, Chrome will issue range requests for just the first few bytes of images, and then attempt to extract the image dimensions from these chunks and display placeholders with the same dimensions in their place.

When Chrome determines that it should attempt to display a placeholder image, it should attempt to get the image in the following ways in order of precedence:

  • If the full image is present and fresh in the cache, then use that.
  • Otherwise, if the server supports range requests, and the image dimensions can be decoded from the first 2KB of the image, then generate and show an image placeholder with the same dimensions.
  • Otherwise, fetch the entire full image from the server as usual.

If the original image is a progressive JPEG, and the first 2KB range of the image contains the full low-resolution version of the image, then Chrome will consider using that low-resolution version of the image as the placeholder.

Platform Support

Since there is a chance that LazyLoad could negatively affect user experience on some iframes and images, a per-element lazyload attribute will be provided for determining the policy for frames and images, to allow a page to optionally indicate to the browser if an iframe or image is well or poorly suited for lazy loading. By default, the browser will decide which frames and images on the page should be lazily loaded.
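For example, a page could hint per element (attribute values as proposed; the URLs are illustrative):

```html
<img src="hero.jpg" lazyload="on">
<iframe src="https://example.com/ad.html" lazyload="off"></iframe>
```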

There will also be a way to control LazyLoad page-wide using feature policy.

Enabling LazyLoad

At the time of writing, LazyLoad is available only in Chrome Canary, behind two required flags:

Flags can be enabled by navigating to chrome://flags in a Chrome browser.



HTML Form Participation API Explained


Web applications often maintain state in JS objects that have no direct DOM representation. Such applications may want such state to be submittable.

Existing form elements map one field name to many values. People often build custom controls precisely because those controls hold more complex values that would be better represented as many names to many values. Subclassing existing form elements doesn't get you this.

And inheriting from HTMLInputElement is insane (not because inheriting is insane, but because HTMLInputElement is insane), so that’s not really how we want author-defined objects to become submittable.

Given the above, we should not try to solve the "how authors can participate in form submission" problem by enabling the subclassing of existing form elements. Instead, we should define a protocol implementable by any JS object, which allows that JS object to expose names and values to the form validation and submission processes.

The 'formdata' event enables any object to provide form data. It helps avoid creating <input type=hidden> elements representing application state, or making submittable custom elements.

The Form Participation API enables objects other than built-in form control elements to participate in form submission, form reset, form validation, and so on.


Goals

  • Arbitrary objects can participate in form submission
  • Autonomous custom elements can associate with a form in the same way as built-in form control elements.


Non-goals

  • Provide the ability to imitate built-in form control elements perfectly, e.g. create an <input>-equivalent element with an autonomous custom element.

API Proposal

There are two API proposals.

Proposal A – Generic Form Participation API which supports both custom elements and non-elements

Proposal B – Form Participation API specific to custom elements.

API Proposal A – Event-based participation

Sample code
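A sketch of Proposal A in use; the state object here is illustrative:

```javascript
const form = document.querySelector('form');
const state = { draft: true }; // application state with no DOM representation

form.addEventListener('formdata', (event) => {
  // Add entries to the submission without <input type=hidden> elements
  event.formData.append('app-state', JSON.stringify(state));
});
```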

API Details – Addition to HTMLFormElement

'formdata' event: This event is dispatched synchronously on a form element when the 'construct the entry list' algorithm is invoked for it. The event bubbles and is not cancelable. The event object has a formData IDL attribute, whose interface is FormData. Event listeners may add entries to the formData attribute.


ISSUE: The formData object must be identical in all event listeners, so an event listener can read entries set by other event listeners if we use the FormData interface. If we'd like to avoid such peeping, we should introduce a new interface which is a kind of write-only FormData, or add a write-only mode to FormData.


RESOLUTION: We don't think we need to introduce such a write-only interface. Preventing such peeping doesn't make much sense, because anyone can access any entries through 'new FormData(form)'.


ISSUE: Should the event be dispatched before iterating controls, or after iterating controls? If we allow the peeping, dispatching after the iteration would be useful.


RESOLUTION: Due to the above resolution, we should dispatch it after the iteration.

Feature Detection

Check existence of HTMLFormElement.prototype.onformdata or window.FormDataEvent.

API Proposal B – For custom elements

Proposal A is enough in many cases. However, it's not easy to support the 'form' content attribute behavior and <label>/<fieldset> association with the approach of Proposal A. Proposal B is an alternative API specific to autonomous custom elements. It provides an easy way to participate in forms.

Sample code
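A sketch of Proposal B; the callback names come from the API details below, and the method bodies are illustrative:

```javascript
class MyControl extends HTMLElement {
  // The presence of these callbacks marks the element as form-associated.
  createdCallback(primitives) {
    // UA-provided HTMLElementPrimitives instance
    this._primitives = primitives;
  }

  formAssociatedCallback(form) {
    console.log('associated with', form);
  }

  disabledStateChangedCallback(disabled) {
    this._disabled = disabled;
  }

  // A form-associated custom element must provide a value setter,
  // e.g. for UA autofill.
  set value(v) {
    this._value = v;
    this._primitives.setFormControlValue(v);
  }
}

customElements.define('my-control', MyControl);
```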

API Details – Define form-associated custom elements

customElements.define(name, constructor): If the prototype of the specified constructor has either 'formAssociatedCallback' or 'disabledStateChangedCallback', the defined element is marked as a form-associated custom element, and the prototype must provide a 'value' setter too.

ISSUE: Should we introduce a new option to define(), like:

customElements.define('my-control', MyControl, { formAssociated: true });

The defined elements will have the following capabilities:

Parsers automatically associate them with a <form> like built-in form-associated elements; DOM mutation functions such as appendChild() also associate form-associated custom elements with a <form> and call formAssociatedCallback later.

form.elements, form.length, fieldset.elements contain form-associated custom elements.

In the 'construct the entry list' algorithm, the UA creates an entry with the name attribute value of the form-associated custom element and the value set by setFormControlValue() of the HTMLElementPrimitives interface. Authors don't need to register 'formdata' event handlers. setCustomValidity() affects form validation.

<label> can search them for a labeled control.

API Details – Functions which form-associated custom elements should provide

void createdCallback(HTMLElementPrimitives primitives): If the custom element has a property named 'createdCallback' and it's callable, it is called after the constructor. The UA creates an HTMLElementPrimitives instance for this custom element and passes it as an argument to the createdCallback callback.

HTMLElementPrimitives interface provides API to support implementing elements.

setFormControlValue() is used to tell the state of a form-associated custom element to the UA. It should be called whenever the value of a form-associated custom element is updated. If the element has a non-empty name content attribute, the specified value is appended to the entry list during form submission. The specified value is also used for UA autofilling.

If a form-associated custom element doesn't want to use the name content attribute, or wants to submit multiple entries, it should specify the entrySource argument. entrySource is a sequence of objects with the following constraints:

  • The length of the sequence must be a multiple of 2
  • Objects at index 2n (0, 2, 4, 6, …) must be nullable DOMString
  • Objects at index 2n+1 (1, 3, 5, 7, …) must be nullable FormDataEntryValue

If entrySource is specified, each pair in entrySource will be added to the entry list during form submission if entrySource[2n] is not empty.

ISSUE: Should we introduce a new interface for a pair of DOMString and FormDataEntryValue? '[name1, value1, name2, value2, …]' is simpler than '[new FormDataEntry(name1, value1), new FormDataEntry(name2, value2), …]'

For example, suppose a form-associated custom element contains three editable fields, and we want to submit cc-cardno=value1, cc-expire=value2 and cc-cvc=value3 if the name content attribute value is 'cc'. We should run code like the following whenever any value or the name content attribute is updated:
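Something like this sketch; the setFormControlValue() signature with an entrySource argument is taken from the description above:

```javascript
const name = this.getAttribute('name'); // e.g. 'cc'
this._primitives.setFormControlValue(null, [
  name + '-cardno', value1,
  name + '-expire', value2,
  name + '-cvc',    value3,
]);
```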

NOTE: We should not pass an HTMLElementPrimitives instance via the constructor instead of the new createdCallback, because that would make 'new MyControl()' not doable. Also, 'connectedCallback' is not suitable: 'new FormData(form)' for a <form> in an orphan tree should still collect values of form-associated custom elements.

The purpose of HTMLElementPrimitives interface is to provide API which custom element implementations can call, but it’s difficult for custom element users to call.

void formAssociatedCallback(HTMLFormElement? form)

If the custom element has a property named 'formAssociatedCallback' and it's callable, it is called at CEReaction timing after the UA associates the element with a form element, or disassociates the element from a form element.

void disabledStateChangedCallback(boolean disabled)

If the custom element has a property named 'disabledStateChangedCallback' and it's callable, it is called at CEReaction timing after an ancestor <fieldset>'s disabled state is changed or the 'disabled' content attribute of this element is added or removed. The argument 'disabled' represents the new disabled state of the element.

set value(v)

A form-associated custom element implementation must provide a value setter. UA’s input-assist features such as form autofilling may call this setter.

Feature Detection

Check existence of HTMLElementPrimitives.prototype.setFormControlValue.

Changes to other API

Move setCustomValidity(), validationMessage, and reportValidity() to HTMLElement. If the context object is neither a listed element nor an autonomous custom element, an InvalidStateError is thrown for setCustomValidity() and reportValidity(), and validationMessage returns an empty string. For autonomous custom elements, setCustomValidity() with a non-empty string makes the element's validity state 'invalid'.

Considered alternatives

Should we implement one of Proposal A and Proposal B, or both?

Though Proposal A can handle both elements and non-elements, it's not the best API for elements, and it's not easy to support some form-related features. For example, it's very difficult to support <fieldset> and <label> association with the Proposal A approach.

If we don't support non-element participants, we can drop Proposal A and implement only Proposal B with a synchronous FormData callback. If we'd like to minimize complexity, we can drop Proposal B, because Proposal A can support both element and non-element participants.

Alternatives of Proposal B

In order to participate in a form by markup / DOM structure, authors need to tell the UA "this element will participate in a form" before the element is connected to a document tree. The current Proposal B is one way to tell it, and it uses customElements.define(). Possible alternatives would be:

  • If an element has a ShadowRoot, and the ShadowRoot has specific callback functions, the element is a participant.
  • Introduce a specific content attribute on the element, e.g. <my-control participant=…>
  • A custom element with a name content attribute is a participant.
  • Introduce an explicit function like document.makeElementParticipatable(element)

Open questions: Should a form-associated custom element provide a 'form' IDL attribute? Should we introduce a new base class like HTMLFormControlElement?

TODO or not TODO

Support form-related CSS selectors

This needs an API to request style re-computation explicitly, as well as a callback to query element status, e.g.

sequence<DOMString> matchedPseudoClassCallback()

This callback is called just before starting style computation. The return value is a sequence of pseudo-class names to be matched, e.g. [':invalid', ':out-of-range']


A custom element implementation calls this when its pseudo-class state changes.


A Refreshing Guide to Object.freeze in Javascript by Dr. Victor Fries


What killed the dinosaurs? The Ice Age!

In JavaScript, objects are used to store keyed collections of various data and more complex entities. Objects penetrate almost every aspect of the JavaScript language.

The object might be accessed as global or passed as an argument. Functions that have access to the object can modify the object, whether intentionally or accidentally. To prevent modification of our objects, one of the techniques is to use Object.freeze().

Freezing an object can be useful for representing a logically immutable data structure, especially if changing the properties of the object could lead to bad behavior elsewhere in your application.

Allow me to break the ice: My name is Object.freeze(). Learn it well, for it’s the chilling sound of your doom.

The Object.freeze() method freezes an object. Basically, it prevents four things:

  • New properties from being added to the object
  • Existing properties from being removed
  • The enumerability, configurability, and writability of existing properties from being changed
  • The values of existing data properties from being changed

The method returns the passed object.

Let’s kick some ice!

Tonight’s forecast… a freeze is coming!

Nothing can be added to or removed from the properties set of a frozen object. Any attempt to do so will fail, either silently or by throwing a TypeError exception (most commonly, but not exclusively, when in strict mode).

For data properties of a frozen object, values cannot be changed, and the writable and configurable attributes are set to false. Accessor properties (getters and setters) work the same (and still give the illusion that you are changing the value). Note that values that are objects can still be modified, unless they are also frozen. As an object, an array can be frozen; after that its elements cannot be altered, and no elements can be added to or removed from it.

The function returns the passed object. It does not create a frozen copy.

Tonight, hell freezes over! (Freezing Objects)
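For example, freezing a plain object:

```javascript
'use strict';

const villain = { name: 'Victor Fries', alias: 'Mr. Freeze' };
Object.freeze(villain);

try {
  villain.alias = 'Captain Cold'; // TypeError in strict mode, silent otherwise
} catch (err) {
  console.log(err instanceof TypeError); // true
}

try {
  villain.heatGun = true; // adding a property also fails
} catch (err) {}

console.log(villain.alias);            // "Mr. Freeze"
console.log(Object.isFrozen(villain)); // true
```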

I’m putting array on ice (Freezing Arrays)
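And the same for an array:

```javascript
const numbers = Object.freeze([1, 2, 3]);

try {
  numbers.push(4); // always throws: the array is not extensible
} catch (err) {
  console.log(err instanceof TypeError); // true
}

try {
  numbers[0] = 10; // TypeError in strict mode, silently ignored otherwise
} catch (err) {}

console.log(numbers); // [ 1, 2, 3 ]
```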

The object being frozen is immutable. However, it is not necessarily constant. The following example shows that a frozen object is not constant (freeze is shallow).

To be a constant object, the entire reference graph (direct and indirect references to other objects) must reference only immutable frozen objects. The object being frozen is said to be immutable because the entire object state (values and references to other objects) within the whole object is fixed. Note that strings, numbers, and booleans are always immutable and that Functions and Arrays are objects.

Freeze in hell, Batman! (The Shallow Freeze)

The result of calling Object.freeze(object) only applies to the immediate properties of object itself and will prevent future property addition, removal or value re-assignment operations only on object. If the values of those properties are objects themselves, those objects are not frozen and may be the target of property addition, removal or value re-assignment operations.
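For example:

```javascript
const employee = {
  name: 'Victor',
  address: { city: 'Gotham' },
};

Object.freeze(employee);

try {
  employee.name = 'Fries'; // blocked: a direct property of the frozen object
} catch (err) {}

employee.address.city = 'Metropolis'; // allowed: the nested object is not frozen

console.log(employee.name);         // "Victor"
console.log(employee.address.city); // "Metropolis"
```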

Everything freezes! (The Deep Freeze)

In this universe, there’s only one absolute… everything freezes!

To make an object immutable, recursively freeze each property which is of type object (deep freeze). Use the pattern on a case-by-case basis: based on your design, apply it only when you know the object contains no cycles in the reference graph, otherwise an endless loop will be triggered. An enhancement to deepFreeze() would be an internal function that receives a path (e.g. an Array) argument, so you can suppress calling deepFreeze() recursively on an object that is already in the process of being made immutable. You still run a risk of freezing an object that shouldn't be frozen, such as window.
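A minimal deepFreeze() sketch (no cycle detection, per the caveat above):

```javascript
function deepFreeze(object) {
  // Freeze nested objects first, then the object itself.
  for (const name of Object.getOwnPropertyNames(object)) {
    const value = object[name];
    if (value && typeof value === 'object') {
      deepFreeze(value); // assumes no cycles in the reference graph
    }
  }
  return Object.freeze(object);
}

const config = { server: { host: 'localhost', port: 8080 } };
deepFreeze(config);

try {
  config.server.port = 9090; // now blocked at every level
} catch (err) {}

console.log(config.server.port); // 8080
```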

Object.freeze vs const

const and Object.freeze are two completely different things.

The const declaration creates a read-only reference to a value. It does not mean the value it holds is immutable, solely that the variable identifier can not be reassigned.

const applies to bindings (“variables”). It creates an immutable binding, i.e. you cannot assign a new value to the binding. Object.freeze works on values, and more specifically, object values. It makes an object immutable, i.e. you cannot change its properties.

In ES5, Object.freeze doesn't work on primitives, which would probably be more commonly declared using const than objects. You can freeze primitives in ES6, but then you also have support for const. Note that const used to declare an object doesn't "freeze" it: you just can't reassign the binding, but you can modify the object's keys freely. Conversely, a variable holding a frozen object can be reassigned, as long as it wasn't declared with const.
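A quick comparison:

```javascript
const binding = { greeting: 'hi' };
binding.greeting = 'hello'; // fine: const protects the binding, not the value
// binding = {};            // would throw: Assignment to constant variable

let frozen = Object.freeze({ greeting: 'hi' });
try {
  frozen.greeting = 'hello'; // blocked by freeze
} catch (err) {}
frozen = { greeting: 'bye' }; // fine: the let binding can be reassigned

console.log(binding.greeting); // "hello"
console.log(frozen.greeting);  // "bye"
```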

Object.freeze vs Object.seal

Objects sealed with Object.seal() can have their existing properties changed. Existing properties in objects frozen with Object.freeze() are made immutable.

The following related functions prevent the modification of object attributes.

Function                   Object is made    configurable set to false   writable set to false
                           non-extensible    for each property           for each property
Object.preventExtensions   Yes               No                          No
Object.seal                Yes               Yes                         No
Object.freeze              Yes               Yes                         Yes
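The difference in code:

```javascript
const sealed = Object.seal({ temperature: -30 });
sealed.temperature = -50;    // allowed: existing properties stay writable
try {
  sealed.humidity = 10;      // blocked: no new properties
} catch (err) {}
try {
  delete sealed.temperature; // blocked: no removals
} catch (err) {}

const frozenObj = Object.freeze({ temperature: -30 });
try {
  frozenObj.temperature = -50; // blocked: values are immutable too
} catch (err) {}

console.log(sealed.temperature);    // -50
console.log(frozenObj.temperature); // -30
```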

Winter has come at last

Yes! If I must suffer… Humanity will suffer with me! I shall repay them for sentencing me to a life without human comfort. I will blanket the city in endless winter! First… Gotham. And then… The world!

Hope you enjoyed the article and learned something new about Object.freeze() in Javascript. Please show us your love by sharing the article, and let us know your views in the comments.