If you’ve been following my blog or WordPress development for some time now, you know that the Gutenberg project contains a number of hidden gems that you, as a JavaScript or WordPress developer, can benefit from in your own standalone applications or WordPress plugins.
One of these hidden gems is the keyboard shortcuts package, a utility package that lets you do all sorts of things related to keyboard shortcuts: adding them, removing them, updating them, and more.
As a standalone package
Like any package in the Gutenberg repository, it’s distributed as a standalone npm package that you can consume in any React application. Here’s a basic example (full example available on Stackblitz):
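A rough sketch of what such an example could look like, assuming a small counter app (the mycounter/increment shortcut name and the Counter component are just illustrative):

import { useCallback, useEffect, useState } from 'react';
import { useDispatch } from '@wordpress/data';
import {
	ShortcutProvider,
	store as keyboardShortcutsStore,
	useShortcut,
} from '@wordpress/keyboard-shortcuts';

function Counter() {
	const [ count, setCount ] = useState( 0 );
	const { registerShortcut } = useDispatch( keyboardShortcutsStore );

	// Declare the shortcut when the component mounts.
	useEffect( () => {
		registerShortcut( {
			name: 'mycounter/increment',
			category: 'global',
			description: 'Increment the counter',
			keyCombination: { character: 'i' },
		} );
	}, [ registerShortcut ] );

	// Attach the callback invoked when the key combination is pressed.
	useShortcut(
		'mycounter/increment',
		useCallback( () => setCount( ( value ) => value + 1 ), [] )
	);

	return (
		<button onClick={ () => setCount( ( value ) => value + 1 ) }>
			Count: { count } (press "i" while the button is focused)
		</button>
	);
}

export default function App() {
	return (
		<ShortcutProvider>
			<Counter />
		</ShortcutProvider>
	);
}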
So to use the package, first, you need to wrap your React application in a ShortcutProvider. (The shortcuts will only work if the active element is inside the shortcut provider in the DOM tree)
The next step is to register your shortcuts using the registerShortcut action. This function call declares the existence of your shortcut in your shortcuts store, providing the keyboard combination used to trigger the shortcut behavior and some additional metadata, like a description and a category.
It’s a good practice to register all your shortcuts when initially rendering your application component.
That said, for the shortcuts to actually perform any action, we need to define their callbacks, and we do so using the useShortcut react hook, the first argument being the shortcut name and the second the function that is called when the key combination is pressed.
Note: Registering shortcuts and using shortcuts are separate steps because some shortcuts are contextual to specific actions, but the application always needs to know about the existence of a shortcut during its lifecycle. For instance, all shortcuts can be rendered in a help panel even if they are not all active. A “copy” shortcut in an editor, for example, is only active if there’s a selection.
The keyboard shortcuts package allows using single character shortcuts or shortcuts with one or two modifiers. These are all examples of valid key combinations:
{ character: 'a' } is equivalent to pressing a.
{ character: 'del' } is equivalent to pressing the delete button.
{ character: 'a', modifier: 'primary' } is equivalent to pressing command + a on Mac.
{ character: 'a', modifier: 'primaryShift' } is equivalent to pressing command + shift + a on Mac.
{ character: 'a', modifier: 'alt' } is equivalent to pressing option + a on Mac.
As a WordPress API
In WordPress, the keyboard shortcuts package is used to register all the block editor shortcuts, but you can also use the same API to register your own custom shortcuts in your blocks or in any WordPress plugin or page. It is available in the wp-keyboard-shortcuts WordPress script. If you plan on using it in your plugin’s scripts, make sure to add wp-keyboard-shortcuts as a dependency of your custom scripts.
The wp.keyboardShortcuts global variable will then be available with all the APIs exposed by the package: useShortcut, ShortcutProvider…
Super Powers
Editing keyboard shortcuts
An important aspect of implementing keyboard shortcuts in any application is picking the right key combination for each shortcut. That said, it is surprisingly hard to come up with combinations that work for everyone, in all browsers and all operating systems. For that reason, the keyboard shortcuts package makes it possible to update registered shortcuts and change their key combinations.
import { useSelect, useDispatch } from '@wordpress/data';
import { store } from '@wordpress/keyboard-shortcuts';

function ToggleIncrementShortcut() {
	const { getShortcutKeyCombination } = useSelect( store );
	const { registerShortcut } = useDispatch( store );
	const toggleShortcut = () => {
		const currentCharacter = getShortcutKeyCombination( 'mycounter/increment' )?.character;
		const nextCharacter = currentCharacter === 'a' ? 'i' : 'a';
		registerShortcut( {
			name: 'mycounter/increment',
			description: 'Increment the counter',
			keyCombination: {
				character: nextCharacter,
			},
		} );
	};

	return (
		<button onClick={ toggleShortcut }>
			Toggle between "i" and "a"
		</button>
	);
}
As we can see in the example above, registering the shortcut again overrides its current definition, allowing us to update any of its properties, including the key combination.
💡 In the future, the block editor might provide this automatically but until then, there’s an opportunity here for a WordPress plugin that provides a UI to edit the existing shortcuts.
Building a performant editor is a very difficult task; it requires constant attention and the monitoring of some key metrics. In the context of the WordPress block editor (aka Gutenberg), we constantly track the following key metrics:
Loading time: The time it takes from the moment the user clicks the “new/edit post” link until the editor loads the post to be edited and becomes responsive to user input.
Average Typing time: The time it takes for the browser to respond to characters being typed. This is one of the most important metrics for an editor; it should ideally be very small, so that the user doesn’t notice any delay or lag.
We also track some secondary metrics that are specific to block editors including:
Block Selection time: In a block editor, everything is a block and the user constantly jumps from one block to another. With this metric, we track the time it takes for the browser to respond when selecting a new block.
Global Inserter Opening time: This tracks the time for the browser to respond when opening the global inserter (or the block library), showing the available blocks.
By constantly keeping track of these numbers while iterating on features and bug fixes for the editor, we managed to improve the performance of the editor drastically over time. In a previous post, I shared some of the techniques we used to make these leaps forward.
That said, one of the important aspects of WordPress and its block editor is their extensibility. The WordPress plugins repository contains thousands of plugins to supercharge your WordPress installation and editor. A typical WordPress install has at least a dozen active plugins. And of course, plugins have costs: the editor needs to be performant intrinsically but also stay performant as you extend it.
Unfortunately, depending on the used plugins, this is not always the case.
Popular WordPress Plugins
The first report compares the metrics of 8 of the most popular WordPress Plugins on the repository in addition to Gutenberg itself (Gutenberg is always enabled in all the tests). Here’s the list of the tested plugins:
Gutenberg v11.3.0 RC 1
Akismet v4.1.10
Contact Form 7 v5.4.2
Elementor v3.3.1
Jetpack v10.0
Really Simple SSL v5.0.8
WooCommerce v5.5.2
WPForms Lite v1.6.8.1
Yoast v16.9
Results
| Plugins           | Loading time | Average Typing time |
|-------------------|--------------|---------------------|
| Gutenberg         | 4318ms       | 45.13ms             |
| Akismet           | -0.57%       | +4.25%              |
| Contact Form 7    | +4.15%       | +3.92%              |
| Elementor         | +10.31%      | +3.51%              |
| Jetpack           | +19.48%      | -22.42%             |
| Really Simple SSL | -0.65%       | +1.84%              |
| WooCommerce       | +16.05%      | +6.51%              |
| WPForms Lite      | +5.52%       | +20.05%             |
| Yoast             | +25.29%      | +3.17%              |
Observations
Here are some of my own take-aways from the numbers above.
Most of the popular plugins have no impact or a reasonable impact on the loading and typing times of the editor.
WooCommerce, Yoast and Jetpack have a noticeable impact on the loading time.
Surprisingly, the Typing metric is faster when using Jetpack.
Analysis
I think the results above are good news for WordPress. Most popular plugins don’t have a big impact on the editor’s performance.
The plugins that interact the most with the editor (adding blocks, meta boxes, or sidebars, or extending the editor via slots…) are the ones impacting the loading time a bit. They might be loading extra JavaScript and stylesheets in the editor.
Note: I tried including the Wordfence Security plugin in my benchmark. Unfortunately, with its default settings, that plugin had a very big impact on the loading time, which triggered timeout errors when running the performance tests, so I was not able to gather numbers for that particular plugin.
Block Editor Plugins
The second report compares the editor metrics for 10 of the most popular plugins that target the block editor specifically, whether it’s block library plugins, or plugins enhancing the editor with tools and customization options. The list of the compared plugins is the following:
Gutenberg v11.1.0 only
CoBlocks v2.16.0
Editor Plus by Extendify v2.8.2
EditorsKit v1.31.5
Getwid v1.7.4
Gutenberg Blocks and Template Library by Otter v1.6.9
Kadence Blocks v2.1.7
Redux v4.2.14
Stackable v2.17.5
Starter Templates v2.6.21
Ultimate Addons for Gutenberg v1.24.0
Results
| Plugins           | Loading Time | Typing Time | Block Selection Time | Inserter Opening Time |
|-------------------|--------------|-------------|----------------------|-----------------------|
| Gutenberg         | 4237ms       | 53.85ms     | 58.23ms              | 59.96ms               |
| CoBlocks          | +255.11%     | -12.44%     | +65.88%              | +14.29%               |
| Editor Plus       | +1064.99%    | -21.15%     | +171.70%             | +404.59%              |
| EditorsKit        | +66.21%      | +77.20%     | +17.29%              | +52.94%               |
| Getwid            | +3.75%       | +2.21%      | +9.07%               | +1.08%                |
| Otter             | +36.30%      | +22.28%     | +12.83%              | -1.13%                |
| Kadence Blocks    | +4.58%       | +4.46%      | +14.03%              | -2.75%                |
| Redux             | +73.93%      | +104.64%    | +16.33%              | +14.76%               |
| Stackable         | +19.58%      | +10.06%     | +6.80%               | +20.60%               |
| Starter Templates | +6.84%       | +9.68%      | +9.87%               | +5.90%                |
| Ultimate Addons   | +13.73%      | -20.43%     | +9.24%               | +15.23%               |
Observations
Here are some of my own take-aways from the numbers above:
No surprise that these plugins have a more visible impact on the numbers since they specifically target and extend the block editor.
The loading time is not consistently impacted by the block library plugins, some are doing better than others.
Editor Plus and EditorsKit impact all editor metrics significantly.
Analysis
Editor Plus and EditorsKit add customization capabilities to the block editor in very different ways: they modify core blocks, add new blocks, and add tools to interact with the editor. Based on my experience, adding this kind of built-in feature to the editor can quickly have a broad impact on performance because it can affect all rendered blocks. These are very valuable plugins, but I do think they require more care than typical plugins when it comes to performance. Tracking editor metrics for these kinds of plugins is key.
Lazy-loading editor assets (JS/CSS) is something we ultimately want to explore in the editor to keep the bundle size and loading time contained, but the loading time numbers here suggest that it’s not a fundamental issue in the block editor itself, since some block libraries add a number of blocks (and assets) without a meaningful impact on the loading time. Plugins like CoBlocks, Redux, or Editor Plus might be up for some quick wins there.
In a previous version of this benchmark, I noticed that most block libraries had a significant impact on the inserter opening metric. This led to some improvements in Gutenberg Core itself: inserter items are now lazy-rendered, meaning that adding more and more items doesn’t impact this metric as much, as the numbers above confirm.
Note: A frontend metric would be a great addition to the key metrics to monitor for block library plugins; it’s often more important than the editor-related metrics.
Methodology of the test
The tests were run sequentially on the same idle computer using @wordpress/env and the Gutenberg e2e performance job.
For each plugin, I didn’t configure it or enable/disable features; I just went with the default settings, considering that most users will use the defaults and that plugins should be performant by default.
The editor is loaded and used with a particularly sizeable post (~36,000 words, ~1,000 blocks).
Of course, this is not a scientific method but based on my previous experience with these metrics, the numbers are meaningful with a margin of error of 5% to 10% to account for the randomness of CPU usage/timing of the test.
One of my main motivations for this post was to highlight these issues more and encourage plugin authors to monitor the performance impact of their code. Performance should be considered a first-class feature. Of course the key metrics for each plugin might differ but a good first step is to run the Gutenberg metrics with or without your plugins and compare the results.
Here’s how you can do it on your own:
First, clone the Gutenberg repository and build it:
git clone git@github.com:WordPress/gutenberg.git
cd gutenberg
npm install
npm run build
The next step is to run the WordPress + Gutenberg environment. We can just use Gutenberg’s built-in environment like so (Docker Desktop is a requirement here):
npm run wp-env start
You should be able to access the testing environment on http://localhost:8889
Install your plugin on the environment above directly from WP Admin and activate it.
You’re now ready to run the tests like so:
npm run test-performance packages/e2e-tests/specs/performance/post-editor.test.js
And that should be it, you can run the tests as often as you wish, try different variations of your plugin, disable it, compare to other plugins…
Some Hints
While working on performance improvements on the editor, we noticed some trends that can help you find the bottlenecks for your own plugins:
For the loading metric, consider checking the initial rendering of your components/UI; sometimes deferring the initial rendering of non-essential UI helps.
The size of the loaded assets can also have an impact on the loading metric.
For the typing metric, consider checking your selectors (wp.data.useSelect, wp.data.withSelect, wp.data.subscribe calls). My previous post goes into more details here.
The Chrome performance tools are a great way to detect and debug performance regressions. One approach I personally use often is to record a trace for a given interaction (like typing a single character, opening the inserter, loading the page, or any interaction you want to debug) and compare the resulting trace with and without your plugin.
Notes
An initial version of this benchmark resulted in very different numbers from the ones we have today, I’ve reached out to some of the plugin authors and shared the numbers with them. I’d like to thank all of them, as they were all receptive. Some gains are already reflected in the new versions tested above and I know that the authors of the plugins above are working on more improvements in the upcoming weeks.
If you run the same benchmarks locally, you might get different numbers, and that’s totally fine. The machine running the tests has an impact there; for instance, your Docker instance may be faster but your browser slower, resulting in different numbers and rates. That said, comparing against your own Gutenberg baseline should be relatively stable compared to the numbers in this post.
Conclusions
I would like to finish by encouraging folks to care about performance daily in their development workflows. For Gutenberg Core itself, this post highlighted some good additions to include in our performance pipeline and some areas worth debugging.
Let’s make WordPress and its plugins blazing (Everyone is using this word lately, I finally found a place for it 😀) fast.
Building a website these days is all about finding the right balance between a coherent and consistent design across the website and customization capabilities that allow specific content to shine.
Long gone is the era where everything was customized manually (remember Dreamweaver and FrontPage?). CSS came along, then different iterations on top of it; guidelines and frameworks exist today to ensure this consistency. Some developers still use Bootstrap, others use Tailwind, and many build their own design systems. Design system, that’s a big word and a big trend these days: the promise of a coherent set of guidelines and components ensuring that developers, designers, and content creators are aligned and share the same expectations.
How does this translate in a CMS? How does this translate in WordPress, which runs more than 40% of the world’s websites? The answer has always been themes. While themes mean different things to different folks and have been used in different ways by different people, in their essence, they are what provides the consistency in the design of the website. They also define what content creators can or cannot customize since the degree of freedom granted to content creators might differ from one website to another depending on the context.
While WordPress continues to push its block editor and starts introducing new systems like Full Site Editing, themes live on and will remain the main entry point to define the design system and the shared guidelines for content creators.
How does this translate in a block world?
Initially, the block editor just embraced the classic WordPress APIs and approach. This meant that in order for a theme to define shared guidelines and settings, it had to rely on a set of available theme support flags; and for the design language and styles, it had to use CSS to override the default styles provided by blocks.
Quickly the limitations of this approach appeared:
The block editor has a lot more customization capabilities than the classic editor by default; theme support flags do not scale and do not provide the flexibility required to control these capabilities properly (per block, per context…).
Blocks come with built-in CSS, and overriding the CSS to match the theme’s flavor is no easy task given the number of variations the blocks can have.
This is where Global Styles and Global Settings come in (we also talk about the theme.json config to refer to these two APIs). What are these new concepts, and how do they affect block and theme authors?
Theme authors
So in order to allow theme authors to provide these shared settings, WordPress and the Gutenberg project introduce the theme.json file. It’s a file that lives at the root of the theme folder and defines two important keys: settings and styles.
Settings
The settings are a list of global or contextual configuration values that define how the editor and blocks behave: which controls are disabled by default and hidden from the UI, and which ones are visible. They also define the color palette and the default typography presets (font sizes, font families…) available for editors to pick from.
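A minimal theme.json sketch (assuming the format targeted for WordPress 5.8; the palette values are illustrative) that disables the drop cap control and declares a small color palette:

{
	"version": 1,
	"settings": {
		"typography": {
			"dropCap": false
		},
		"color": {
			"palette": [
				{ "slug": "primary", "color": "#0073aa", "name": "Primary" }
			]
		}
	}
}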
In the example above, the theme is forbidding the use of dropCap in the UI for all blocks making use of that setting.
Settings can also be more granular and contextual to specific blocks, to support use-cases like disabling colors everywhere but enabling them for one specific block only. For such use-cases, we just use the block name to define block-specific settings.
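For instance, here’s a sketch (same assumptions as above) that disables custom colors globally but re-enables them for the Paragraph block only:

{
	"version": 1,
	"settings": {
		"color": {
			"custom": false
		},
		"blocks": {
			"core/paragraph": {
				"color": {
					"custom": true
				}
			}
		}
	}
}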
Styles
The styles section, on the other hand, is about defining the design language of the theme. It allows theme authors to define the default colors, font sizes, line heights, font families, link colors, heading sizes… At render time, it is translated into a CSS stylesheet that is injected into the frontend and the editor.
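A sketch of the styles section, assuming the elements syntax from the 5.8 format (the color value is illustrative):

{
	"version": 1,
	"styles": {
		"elements": {
			"link": {
				"color": {
					"text": "#0073aa"
				}
			}
		}
	}
}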
In this example, I’m defining the color of all link (a) elements across blocks.
In the same way, I can override these styles for a specific block. In the following example, I set the default background for buttons as blue with a white text color.
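Something like this hypothetical snippet:

{
	"version": 1,
	"styles": {
		"blocks": {
			"core/button": {
				"color": {
					"background": "#0000ff",
					"text": "#ffffff"
				}
			}
		}
	}
}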
It’s also important to note that when using Global Styles and theme.json, editor styles generated from the theme config are automatically loaded into the editor. Also, the presence of a theme.json file in your theme directory is an indicator for the block editor to use a simpler markup for some blocks, like the Group block.
Block authors
The shared settings and styles above work across Core blocks but third-party block authors can also support these in their blocks.
Settings
In order for a block to embrace Global Settings in its editor UI, a dedicated React hook called useSetting can be used:
import { useSetting } from '@wordpress/block-editor';

// Somewhere in your block's edit function:
// retrieve the value of the dropCap setting.
const isEnabled = useSetting( 'typography.dropCap' );
if ( ! isEnabled ) {
	return null;
}
return <ToggleControl ... />;
In this example, we’re retrieving the value of the typography.dropCap setting and if the dropCap is enabled, we show the corresponding UI to allow content creators to use a drop cap.
That’s it: all settings can be accessed in the exact same way. For a complete list of settings available to the block authors, take a look at this reference.
Styles
Global Styles, on the other hand, should work mostly by default in all blocks thanks to the CSS cascade. Global Styles work by generating and injecting CSS based on the theme configuration in the theme.json file or on the saved user configuration (for FSE themes only).
That said, some blocks opt out of the generated class name, which means the generated global styles won’t apply to them. For these blocks, a selector must be provided in the block.json config; that selector is then used instead of the generated class names to generate the styles.
Note: A block can be styled using Global Styles even if it doesn’t provide any UI for the user to edit these styles. In most situations though, support for these customizations in the UI can be added quickly to any block, static or dynamic, thanks to the Block Supports API. Also, when using Block Supports, blocks automatically adhere to the Global Settings discussed above.
When can I start using these?
The APIs mentioned here are available when the Gutenberg plugin is active and are targeted to land as stable in Core in the upcoming WordPress 5.8 release, slated for July 20. If you’re a theme or block author, it’s time to start familiarizing yourself with these APIs.
Writing software is easy, sustaining it for years is harder, and doing it for open-source software is a challenge. Here’s a story about my journey to help build the WordPress block editor from a maintainer’s perspective, a perspective probably invisible to most developers and contributors (unless you’re an open-source project maintainer).
The fun part
As a long time developer, my main motivation is to ship features, write software, and put it in the hands of the users to help them achieve their goals. So when I heard about WordPress thinking about building a new editor from scratch, I immediately understood how impactful that project could be, given the scale of WordPress and the central place the editor occupies in any CMS. Thus, I volunteered very early on to join these efforts and was delighted to learn that I was part of a small team of people who were being sponsored to lend a hand there.
Starting a project from scratch is an opportunity most developers enjoy. The early days are where the fun happens: you get to define the structure, the guidelines of the project, the code style. You get to choose the technologies to be used and participate in the early prototypes. You define the base APIs and you get to engage in early design discussions. And more importantly, for a developer motivated by impact, you get to ship software at a very high pace. It is very rewarding.
And this is exactly what happened with the block editor project. Early on, we were a group of a dozen folks showing up at the weekly #core-editor meetings; we worked on several prototypes, created the base UI components, defined the desired base Block API and block-based format, and eventually reached an important milestone where the block editor could be shared with users and third-party developers as a beta version.
The growing part
The initial release of an open-source project is one of the most rewarding moments of the timeline. Early adopters get to enjoy your work, provide feedback you can act on. People start asking for features via issues and, with your existing knowledge of the project, your voice matters. Some of them can also provide their own contributions and open pull requests. Your feedback is important there since you have worked on the original APIs and architecture of the project. Eventually, you become an expert, a public figure, you get respected (or hated) for your work, but your input becomes necessary.
For the block editor project, there were a few of us in this situation and we were able to reasonably balance receiving feedback, acting on it, and making substantial improvements and iterations to the overall project.
At this point in time, you also start to be careful about public APIs. Like any WordPress feature, the block editor is meant to be extensible at its core: third-party developers can write custom blocks and extend the editor in a number of ways. As we started getting more users for our plugin (a beta product), and even though we were in a beta period in which API changes are allowed, we had to start being very explicit there: identify and document the API changes, and ideally provide upgrade paths and time for third-party developers to adapt their code.
The serious part
Next up in the lifecycle of the software is to actually ship a stable version. It is very hard to know exactly where to draw the line and stop the iterations on the beta product and make the jump — but eventually the time comes, and with it comes the age of maturity. Your software starts to be used by a large number of users and extended by a number of developers (depending on the scale of your market/community).
In terms of software features, this is generally a small step, the software doesn’t change so much for its stable release. But in terms of flows and dynamics of the project, this is a huge turning point.
The first challenge you’ll face here is dealing with backward compatibility for all the APIs you have been building so far. The backward compatibility strategy differs from one project to another, but in general this means that you’re engaged to maintain these APIs working properly for a long time. You can’t risk breaking the user/developer’s trust.
For an npm library (or any other developer dependency), you have the luxury of relying on semantic versioning: making breaking changes to the API is allowed as long as you update the version of your software accordingly, which communicates your intent to your users. This is made possible because updating a dependency is an explicit action that a developer takes in their development environment, so they can make sure their extension/product still works with the new version before pushing an update to production. Nonetheless, library authors try to avoid breaking changes as much as they can, or at least reduce their frequency. The React team, for instance, only releases an incompatible version every couple of years or so, and when they do, they make sure to provide a simple upgrade path for their users.
For WordPress, the story is very different. While WordPress does make some small breaking changes from time to time, its goal is to do none. This is understandable because updating WordPress is an operation users perform on their production websites. These updates can also be done automatically without any manual intervention. And with the scale of WordPress (38% of the websites at the time of writing), it can’t afford to break millions of websites because of a change incompatible with third-party plugins.
So when the block editor landed as stable in WordPress 5.0, we knew we were making strong commitments towards supporting its APIs for a long time and this had a non-negligible impact on the development pace. It is very hard to quantify exactly but, for every pull request, a very careful review is required to check the impact on existing sites and APIs. This can also mean intentionally delaying features and enhancements until there’s a better moment/way to introduce the API changes with the minimum impact on existing sites and plugins. Spreading changes across different releases is a common strategy to help communicate changes and give time to third-party developers to adapt their code before actually making the required changes. As an example, it is no surprise that WordPress still uses jQuery 1, but it’s important to understand that a migration process throughout several major releases is underway.
While the impact of the backwards compatibility strategy on the development process and pace was something we anticipated properly, what came as a surprise to me was another consequence of the stable release: we put the software in the hands of millions of users while the size of the group that was referred to as the “experts” of the project remained unchanged. This led to a very high influx of notifications, direct messages, mentions on issues, pull request reviews, requests to discuss technical discussions and feature proposals. We became the bottleneck.
I’ve been reading a little bit about the subject (I strongly recommend these two posts: The city guide to open source by Devon Zuegel and (Open) source of anxiety by Henry Zhu), and this seems to be a common problem in successful open-source projects: the people most equipped to move the project forward by undertaking big changes and improvements are the least likely to have time to actually make these changes.
What’s next
This is the current challenge we’re faced with: how to make sure we help the community as much as we can while moving the big upcoming projects forward (full site editing, the widgets screen, the navigation block, global styles, just to name a few).
WordPress has an amazing community. More contributors are embracing the vision and gaining expertise. I’m confident that, with the participation of all, we’ll make it happen and climb another step in our journey to democratize publishing.
For WordCamp Europe Online Contributor Day, I’ve prepared a post to onboard new contributors. I’ve used notion.so for this. It’s great software, but a friend of mine rightfully commented about it being a missed opportunity to use Gutenberg.
What if you could
open your browser,
type a URL,
and immediately start typing in Gutenberg.
and when you’re ready to share your content, click a button and send the sharing link to your collaborators.
the content will be encrypted,
only your collaborators will be able to read it,
even the web application’s server can’t decrypt it
then, you can work on the content live with your collaborators and potentially persist it to the cloud once done.
This is exactly what my new side project is about. Try it for yourself: https://asblocks.com.
Supported features
It’s still a young project, but it’s already packed with features:
End-2-end Encryption.
Live collaboration/editing.
Read-only link.
Dark Mode.
Supports almost 30 Gutenberg blocks.
Cloud persistence.
Next features on the roadmap include:
Comments.
Live Chat.
Selection/Caret indicators.
Document outline, counts.
Local save button.
Local storage persistence.
Notes
The live collaboration conflict resolution may still have some small bugs for the moment.
This is inspired by excalidraw (Similar idea applied to diagrams).
And WordPress
As you might already know, one of the next phases of the WordPress Gutenberg project is to bring collaborative editing to Core. AsBlocks is an important step in our journey to understand live-collaboration and bring it to Core. A WordPress plugin based on AsBlocks’s technology is also on the radar.
Open-source
Last but not least, it’s GPL, same as WordPress, and you can help shape the project on the GitHub repository.
You might not know yet but WordPress is working on a project called Full Site Editing with the goal of allowing users to edit any part of their site in a single and coherent way using the block editor.
The project is based on a new kind of theme called “block-based themes”. If you want to learn more about the project and these themes, I’d encourage you to check out the following links:
Full Site Editing and block-based themes are still very experimental, and since I’m actively working on the project, I decided that the best way to test the work we’re doing is to use a block-based theme on my own blog (the experiment is already successful, as I’ve managed to discover some bugs).
I’ve now switched the theme of my site to use the TwentyNineteen theme being developed on the theme experiments repository.
I’m not going to lie: don’t do this unless you feel adventurous. The project is still being heavily iterated on; it is lacking a lot of fundamental blocks, and the UX interactions are not polished. That said, I was very pleased to be able to just open the Site Editor page and get a representation close to the frontend, where I tweaked some parts of the footer/header without having to dive into several menus, widgets, the customizer, and settings pages. I look forward to being able to hide these pages entirely from my admin, as I won’t need them anymore.
This post presents different performance improvement and monitoring techniques that can be used in any React/Redux application.
Akin to the React Concurrent mode, it also introduces an async mode for Redux applications where the UI can’t be windowized.
WordPress 5.0 included a new block-based content editor. The editor is built as a typical React/Redux web application with a global store and a tree of UI components retrieving data using state selectors and performing mutations using actions.
Note: To be more precise, the WordPress block editor (sometimes called Gutenberg) uses multiple stores, but for the purpose of this post, we can simplify and assume it uses a single one.
Relying on the shoulders of giants: react-redux
The main performance bottleneck for most React/Redux applications is the fact that any change in the global state can potentially trigger updates in all the components subscribed to store updates.
Fortunately, the simple fact of using react-redux is enough to solve most of these performance issues. The library is highly-optimized out of the box.
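Here’s a rough sketch of such a connected component (getBlock, Block, and the blockId prop are illustrative stand-ins, not the editor’s real code):

import { connect } from 'react-redux';

// Hypothetical selector returning the block object stored for a given id.
const getBlock = ( state, blockId ) => state.blocks[ blockId ];

// Hypothetical presentational component.
const Block = ( { block } ) => <div>{ block.name }</div>;

// Executed on every state change to compute the props of the UI component.
const mapStateToProps = ( state, ownProps ) => ( {
	block: getBlock( state, ownProps.blockId ),
} );

const ConnectedBlock = connect( mapStateToProps )( Block );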
In the example above, each time the global state is changed, the mapStateToProps function is executed to compute the updated props passed to the underlying UI component.
By default if the computed props (block in our example) don’t change, the underlying component (Block in the example) is not re-rendered.
It’s important to note that react-redux‘s connect function performs a shallow comparison to check if the computed props changed or not. This means generating new object instances in mapStateToProps should be avoided and selectors (getBlock in our instance) should ensure that it returns the same block object instance on each call unless an actual change to the block object has been made.
// Bad: a new block object is generated on each render, causing rerenders even if the block name didn't change.
const mapStateToProps = ( state ) => ( {
	block: { name: getBlockName( state ) },
} );
const MyBlockComponent = connect( mapStateToProps )( BlockComponent );

// Bad: Doing the same thing in a factorized selector is bad as well. It is strictly equivalent.
const getBlock = ( state ) => ( { name: getBlockName( state ) } );
Track component re-rendering
The first thing you should track when you notice performance degradations is whether you have components being re-rendered too often and without any meaningful prop change.
To do so, install the React Developer Tools browser extension, check the Highlight Updates option and notice the flashing borders around all the components being re-rendered. You can also inspect a given component and check which props are changing when it’s re-rendered.
Proxifying event handlers
Often, when using react-redux’s connect function, you end up providing event handlers that depend on props. For components optimized for purity (which don’t re-render unless their props change), this can lead to unwanted re-renders because the event handlers end up being recreated on each render.
To address this issue, @wordpress/data implemented its withDispatch higher-order component (the equivalent of connect) around the idea that we only care about these event handlers when the event actually happens (a click on a button…). So instead of recreating the event handlers on each render, withDispatch provides proxies to the actual event handlers: these proxy instances don’t change from render to render and only evaluate the actual event handlers when they get called. The assumption here is that the list of event handlers won’t change depending on the component’s props.
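As a rough illustration (the component, store name, and clientId prop are stand-ins, not the editor’s actual code):

import { withDispatch } from '@wordpress/data';

// Hypothetical presentational component receiving the proxied handler.
const RemoveButton = ( { onRemove } ) => (
	<button onClick={ onRemove }>Remove block</button>
);

// The onRemove prop keeps the same identity across renders; the handler
// body is only evaluated when the click actually happens.
const RemoveBlockButton = withDispatch( ( dispatch, ownProps ) => ( {
	onRemove() {
		dispatch( 'core/block-editor' ).removeBlock( ownProps.clientId );
	},
} ) )( RemoveButton );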
Note that the data module now offers useSelect and useDispatch React hooks, which require a different technique to optimize event handlers that need data dependencies.
Optimize the selectors
Now that we’ve ensured our components re-render only when necessary (when one of their props changes), we started monitoring our application to find the bottlenecks.
When building an editor, one of the most important interactions you want to optimize is “typing”. When quickly typing in the editor, the user shouldn’t notice any slowness; the feedback (the character being printed) should be immediate. Using the Chrome performance tools, we started monitoring the keypress event duration.
Keypress event monitoring
Quickly, we realized that the more content the editor is showing, the more rendered components we have, the worse the typing performance gets. And even if the components were memoized, their selectors were still being called on each change even if their result didn’t change. Selector calls quickly became the bottleneck of the editor’s performance. Our next step was to optimize the performance of our selectors.
The most important technique to be aware of here is what we call function memoization. Memoizing a function means that a function is not executed twice unless its inputs (arguments) change.
In the React/Redux world, there are a number of libraries that help you memoize selectors, some of the most used ones being reselect and rememo.
Note Memoization is a good technique but it’s important to monitor and measure the performance improvements. Start by memoizing the less-performant selectors. Memoization is also a technique that can be used to avoid creating new objects/array instances if the inputs are the same (which then prevents components from re-rendering if not necessary).
Reshape the state tree to avoid high selector cache invalidation rates
In a typical Redux store, you’ll have some data that changes very often and other state values that don’t. It is important that these two things stay separate in the Redux state tree for better selector performance.
Let’s take the following blocks redux state as an example:
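Something along these lines (a hypothetical shape, not the editor’s actual state tree), where each block stores its attributes alongside its id in a single array:

const exampleState = {
	blocks: [
		{ id: 'block-1', name: 'core/paragraph', attributes: { content: 'Hello' } },
		{ id: 'block-2', name: 'core/image', attributes: { url: 'https://example.com/image.jpg' } },
	],
};

// A naive selector computing the list of block ids:
// it returns a new array instance on every call.
const getBlockIds = ( state ) => state.blocks.map( ( block ) => block.id );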
If we want to optimize the selector to avoid computing a new array if the state stays the same, we’d write something like:
import createSelector from 'rememo';

const getBlockIds = createSelector(
	( state ) => state.blocks.map( ( block ) => block.id ),
	( state ) => [ state.blocks ]
);
The second argument here tells the selector to avoid recomputing the array if the state.blocks value didn’t change.
That’s a good first step. The problem, though, is that we don’t reorder or add new blocks as often as we change block attributes: when an attribute changes, the selector’s value won’t change, but the whole “blocks” state will, causing the selector to recompute again.
This issue is solved by identifying what are the parts of the state that change often, and the ones that change less. Ideally, we should group all state values that change “together” under the same state key.
Here’s an example of a rewrite that can lead to better performance:
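A possible reshape (again just a sketch): keep the block order and the frequently-changing attributes under separate keys.

const exampleState = {
	blocks: {
		order: [ 'block-1', 'block-2' ],
		attributes: {
			'block-1': { content: 'Hello' },
			'block-2': { url: 'https://example.com/image.jpg' },
		},
	},
};

// The ids array is now stable by construction: it only gets a new
// reference when blocks are added, removed, or reordered.
const getBlockIds = ( state ) => state.blocks.order;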
You’ll notice that now the array returned by getBlockIds won’t change unless the order or the list of blocks is actually changed. An update to the attributes of blocks won’t refresh the value returned by that selector.
Async mode
Memoizing slow selectors did have an impact on the performance but overall, the high-number of function calls (selector calls) was still an issue even if a single function call is very fast. It became apparent that instead of optimizing the selectors themselves, our best bet would be to avoid calling the selectors entirely.
This is a typical performance issue in React/Redux applications, and the approach most people take to solve it is windowing: if a component is not currently visible on the screen, there is no need to render it to the DOM and keep it up to date. react-window is one of the most used libraries when it comes to implementing windowing in React applications.
Unfortunately, windowing is not an option we can consider for the block-editor for multiple reasons:
In general, windowing works by computing the height of the hidden elements and adapting the scrolling behavior to the computed height even if the elements are not rendered on the DOM. In the case of the block editor, it’s actually impossible to know or compute the height of the blocks and their position without really rendering them to the DOM.
Another downside is accessibility support: screen readers tend to scan the whole DOM to provide alternative ways to navigate the page, without any notion of “visible” elements. Something that is not rendered in the DOM is something you can’t navigate to.
For these reasons, we had to be a bit more innovative here. While the initial rendering of the components had a cost, the most important thing for us is to keep the UI responsive as we type, and the bottleneck at this point was the number of selectors being called.
That said, in a typical block editor, when you’re editing a given block, it is very rare that an update to that block affects other parts of the content. Starting from this hypothesis, we implemented the Async Mode.
What is the Data Module’s async mode?
The Async mode is the idea that you can decide whether to refresh/rerender a part of the React component tree synchronously or asynchronously.
Rendering asynchronously in this context means that if a change is triggered in the global state (Redux store), the subscribers (selectors) are not called synchronously; instead, we wait for the browser to be idle before performing the updates to the React tree.
It is very similar to the Concurrent Mode proposed by the React team in recent React versions. The difference is that React’s Concurrent Mode uses setState calls to defer rendering, while in our case we want to defer the selector calls, which in the call chain happen before the React setState calls.
How did we apply the async mode to the editor?
Our approach was to consider the currently selected block as a synchronous React component tree, while considering all the remaining blocks as asynchronous.
It is possible to opt out of Async mode for a given block. This can be useful for blocks that rely on other blocks to render properly (e.g. the Table of Contents block).
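The underlying primitive is exposed by @wordpress/data as an AsyncModeProvider component; here’s a rough sketch of the idea (Block and the props are illustrative, not the editor’s actual implementation):

import { AsyncModeProvider } from '@wordpress/data';

// Hypothetical component rendering a single block.
function Block( { clientId } ) {
	return <div>{ clientId }</div>;
}

function BlockListItem( { clientId, isSelected } ) {
	// The selected block keeps updating synchronously; every other block
	// defers its store updates until the browser is idle.
	return (
		<AsyncModeProvider value={ ! isSelected }>
			<Block clientId={ clientId } />
		</AsyncModeProvider>
	);
}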
At this point, our biggest performance issues were solved, but still, we wanted to continue improving the editor’s performance as much as we can.
What’s next
Building a performant editor is a challenge, an ongoing one; unlike regular features, performance is not something you can implement and forget about, it’s a continuous workflow. If you take one thing away from my lengthy blog post, I hope it’s this:
Identify the flows you want to optimize,
Identify the bottlenecks of your application,
Measure constantly, ideally in an automatic way on each PR,
Don’t learn and apply optimization techniques blindly, instead, read about them, know their existence and try to adopt the best techniques based on your particular use-cases.
The WordPress block editor is based around the idea that you can combine independent blocks together to write your post or build your page. Blocks can also use and interact with each other. This makes it very modular and flexible.
But the block editor does not embrace modularity for its behavior and output only; it is also built from the ground up as several reusable and independent modules (or packages) that, combined together, lead to the application and interface we all know. These modules are known as WordPress packages and are published and updated regularly on the npm registry.
Right now these packages are built in the Block Editor Repository and used to power the block editor. Ultimately, these packages will be used to power any page in the WordPress Admin.
Modular architecture
Using a modular architecture has several benefits for all the actors involved:
Each package is an independent unit and has a well defined public API that is used to interact with other packages and third-party code. This makes it easier for Core Contributors to reason about the codebase. They can focus on a single package at a time, understand it and make updates while knowing exactly how these changes could impact all the other parts relying on the given package.
A modular approach is also beneficial to the end user. It allows WordPress to selectively load scripts on different admin pages while keeping the bundle size contained. For instance, if we use the components package to power our plugin’s settings page, there’s no need to load the block-editor package on that page.
This architecture also allows third-party developers to reuse these packages inside and outside the WordPress context by using these packages as npm or WordPress script dependencies.
Types of packages
Almost everything in the Gutenberg repository is built into a package. We can split the packages into two different types:
Production packages
These are the packages that ship in WordPress itself as JavaScript scripts. These constitute the actual production code that runs on your browsers. As an example, there’s a components package serving as a reusable set of React components used to prototype and build interfaces quickly. There’s also an api-fetch package that can be used to call WordPress Rest APIs.
Third-party developers can use these production packages in two different ways:
If you’re building a JavaScript application, website, or page that runs outside the context of WordPress, you can consume these packages like any other JavaScript package in the npm registry:
npm install @wordpress/components
import { Button } from '@wordpress/components';

function MyApp() {
	return (
		<Button>Nice looking button</Button>
	);
}
If you’re building a plugin that runs on WordPress, you’d probably prefer consuming the package that ships with WordPress itself. This allows multiple plugins to reuse the same packages and avoid code duplication. In WordPress, these packages are available as WordPress scripts with a handle following the format wp-package-name (e.g. wp-components). Once you add the script to your own WordPress plugin’s script dependencies, the package will be available on the wp global variable.
// myplugin.php
// Example of script registration depending on the "components" and "element" packages.
wp_register_script( 'myscript', 'pathtomyscript.js', array( 'wp-components', 'wp-element' ) );
// Using the package in your scripts
const { Button } = wp.components;

function MyApp() {
	return (
		<Button>Nice looking button</Button>
	);
}
Some production packages come with stylesheets that are required for them to function properly.
If you’re using the package as an npm dependency, the stylesheets will be available in the build-style folder of the package. Make sure to load this style file in your application.
If you’re working in the context of WordPress, you’ll have to enqueue these stylesheets or add them to your stylesheets dependencies. The stylesheet handles are the same as the script handles.
In the context of existing WordPress pages, if you don’t define your script and style dependencies properly, your plugin might still work if these scripts and styles are already loaded there by WordPress or by other plugins, but it’s highly recommended to define all your dependencies exhaustively to avoid potential breakage in future versions.
Packages with data stores
Some WordPress production packages define data stores to handle their state. These stores can also be used by third-party plugins and themes to retrieve and manipulate data (refer to my previous post on the subject for more details). The names of these data stores are also normalized following the format core/package-name (e.g. the @wordpress/block-editor package defines and uses the core/block-editor store).
If you’re using one of these stores to access and manipulate WordPress data in your plugins, don’t forget to add the corresponding WordPress script to your own script dependencies for your plugin to work properly. (For instance, if you’re retrieving data from the core/block-editor store, you should add wp-block-editor to your script dependencies, as shown above.)
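For instance, a tiny sketch of reading from that store in a plugin script (assuming wp-data and wp-block-editor are declared as script dependencies):

// Log the number of blocks currently loaded in the editor.
const blockCount = wp.data.select( 'core/block-editor' ).getBlockCount();
console.log( 'The editor currently contains ' + blockCount + ' blocks.' );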
Development packages
These are packages used in development mode to help developers with daily tasks to develop, build and ship JavaScript applications, WordPress plugins and themes. They include tools for linting your codebase, building it, testing it…
Going further
The WordPress packages are a gold mine and a huge time saver when it comes to building JavaScript applications, whether as WordPress plugins or as independent applications.
In the next weeks, I’ll try to write a series of posts explaining some of these packages in detail and how to reuse them in your own code. In the meantime, I encourage you to explore the documentation of the different packages in the block editor handbook.
With Gutenberg, we made the choice to use JavaScript heavily in order to build the UI of the editor, not because we’re nerdy hipsters but essentially because it is the perfect fit to address the UI and UX challenges of a heavily interactive interface in the browser.
As a consequence, we’ll start to see a shift in the WordPress community. Plugin developers are required to use JavaScript more in order to extend the editor; most blocks need to be developed using this technology. The modules Gutenberg provides (components, the data module, i18n, apiFetch…) will also encourage developers to extend other parts of WP Admin in JavaScript. Instead of writing HTML/CSS screens from scratch and rendering them from the server, developers are able to bootstrap and prototype fully accessible new screens in WP Admin by composing these components in a small number of lines of code.
Learning process
But when we talk about JavaScript in general, we can think of two different approaches:
Using untranspiled ES5 code.
Leveraging build tools like Babel and webpack, and writing ESNext JavaScript code (ES2015 and beyond).
Most plugin authors already write ES5 code in their PHP-rendered UIs or in their themes, and the Gutenberg APIs can continue to be used this way. But our role as Core Contributors should also be to educate people to get on the ESNext train, as it can be a huge improvement in terms of productivity and development experience.
That said, learning ESNext and all the tools involved can be a bit overwhelming if you’re coming from a PHP background without prior heavy JavaScript experience. To help with the transition, the WordPress community started working on and providing tools (like Create Guten Block).
Using these tools can feel like “magic” though. It works but as a developer, you don’t really know what’s happening behind the scenes. Debugging code can be a challenge if you don’t understand how everything fits together.
For this particular reason, I created the WordPress JavaScript Plugin Starter. Unlike other starters though, it’s written as a tutorial: each commit of the repository is a step further in the setup, and the README goes through each of these steps and explains it.
Hopefully, at the end of the day, you’ll be able to use the starter not only to start a WordPress plugin but also to understand how the plugin works and how all the tools fit together to ship production-ready JavaScript plugins.
Manipulating client data in highly interactive screens in WordPress (essentially the editor page in WordPress pre-5.0.0) is not the easiest task for plugin authors. As much as WordPress provides a large number of functions to access and manipulate data server-side, it fails at providing a consistent way to access and manipulate data on the client.
And by client side data, I refer to things like:
What’s the current post content in the editor? How can I update it programmatically?
What’s the currently selected tags/categories?
Is a given metabox visible or not?
How can I be notified if an image block is inserted into the editor or if the user switches the post status?
Client-data is also about accessing the WordPress REST API Data client-side:
How do I retrieve the latest posts?
How do I retrieve the current user’s object?
How can I create a new Category?
And it can also encompass plugins data. For example:
How do I store/update and access my own plugins client-side data?
How do I access and update the content of the Yoast metabox?
How do I retrieve the content of ACF metaboxes in a custom ACF plugin?
Often, to address these use-cases, we had to manipulate the DOM directly: Retrieve input’s values directly from the DOM or subscribe to DOM events to ensure we’re notified properly. This often leads to hacky code and code that breaks on WordPress/plugin updates because DOM access is not an official extensibility API.
Gutenberg JavaScript-heavy World
With Gutenberg’s release coming and as we add more and more JavaScript-powered UI to WordPress pages, the need for a consistent solution to manage client-side data is urgent; we can’t rely on DOM access anymore. To address this, Gutenberg introduces the Data Module.
Retrieving Data
One of the first use-cases of the Data module is to retrieve data defined by WordPress. To do so, you need to specify the namespace of the data and call a selector on that namespace.
Here’s how we retrieve the content of the post being edited in a Gutenberg Page.
var content = wp.data.select( 'core/editor' ).getEditedPostAttribute( 'content' );
Notice that the data module API is available in the wp.data global variable. The namespace to access the editor’s data is core/editor. And in each namespace, you can use a list of selectors.
A selector is a simple JavaScript function used to retrieve client-side data. In the example above, the selector being called is getEditedPostAttribute. This selector accepts an argument corresponding to the post attribute to retrieve. If we need to retrieve the title of the post instead of its content, we can do:
var title = wp.data.select( 'core/editor' ).getEditedPostAttribute( 'title' );
Protip: The list of the available selectors/namespaces is not documented properly on WordPress/Gutenberg yet. To see the full list of the core/editor‘s selectors, you can check the selectors file here.
You can also type wp.data.select( 'core/editor' ) in your browser’s console in any Gutenberg page to inspect the full list of the selectors available in this namespace.
Updating Data
So, selectors allow us to retrieve data, and similarly, each namespace can define actions to manipulate data (create/update/remove). For example, to update the title of the post being edited in Gutenberg, you can do:
wp.data.dispatch( 'core/editor' ).editPost( { title: 'My New Title' } );
An action is a function called to update the client-data defined in a given namespace.
Protip: To see the full list of the actions defined by the core/editor namespace, you can check the actions file here. You can also type wp.data.dispatch( 'core/editor' ) in your browser’s console in any Gutenberg page to inspect the full list of the actions available in this namespace.
Register a custom “namespace”
Plugins can also manage their client state using the data module. To achieve this, they need to register their selectors/actions in addition to a reducer to hold and update the state.
A reducer is a function describing the shape of the initial state and how the state value evolves in response to dispatched actions.
Protip: The Data module is built on-top of the redux library. Learning redux is not required to use it but taking a look at the Redux Docs and its glossary should help you master the data module.
As an example, Let’s register a custom store to keep track of a list of todo items:
// This is the reducer
function reducer( state = [], action ) {
	if ( action.type === 'ADD_TODO' ) {
		return state.concat( [ action.todo ] );
	}

	return state;
}

// These are some selectors
function getTodos( state ) {
	return state;
}

function countTodos( state ) {
	return state.length;
}

// These are the actions
function addTodo( text, done = false ) {
	return {
		type: 'ADD_TODO',
		todo: { text: text, done: done },
	};
}

// Now let's register our custom namespace
var myNamespace = 'my-todos-plugin';

wp.data.registerStore( myNamespace, {
	reducer: reducer,
	selectors: { getTodos: getTodos, countTodos: countTodos },
	actions: { addTodo: addTodo }
} );
Now that the custom namespace is registered, we can consume this in our code, the same way we did for the core/editor’s store.
// Add a new todo item
wp.data.dispatch( 'my-todos-plugin' ).addTodo( 'Finish writing a blog post about the data module', false );
// Retrieve the list of todos
var countTodos = wp.data.select( 'my-todos-plugin' ).countTodos();
React to changes in the state
Another important use-case for WordPress plugins is to react to changes happening in Core data or in other plugins’ data. To address this, the Data Module provides a subscribe function, which allows registering listeners that get called each time the state changes.
In the following example, we trigger some random behavior each time a new block is added to the editor:
var currentCount = wp.data.select( 'core/editor' ).getBlockCount();

wp.data.subscribe( function() {
	var newCount = wp.data.select( 'core/editor' ).getBlockCount();
	var hasNewBlocks = newCount > currentCount;
	currentCount = newCount;

	if ( hasNewBlocks ) {
		// A new block has been added, do something
		console.log( 'The new block count is: ' + newCount );
	}
} );
Declarative data needs
In addition to a new way to handle data client-side, as WordPress and plugins move towards more JavaScript UIs, the use of React and its WordPress abstraction wp.element is growing.
The WordPress element module allows you to describe UI using functions (or components) that take props and return an HTML representation of the component.
A simple component displaying an h1 can be written like so:
// You can use JSX instead of wp.element.createElement
var el = wp.element.createElement;

function Title( props ) {
	return el( 'h1', {}, props.title );
}
Very often, these UI components need data to work properly. In the example above, the Title component expects a title prop to display its content in an h1.
Let’s say we’d like to display the Title component using the title of the post being edited in Gutenberg. We could use the select function explained previously and do something like this:
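A naive sketch, reusing the Title component and the el shorthand from above:

function EditorPostTitle() {
	var title = wp.data.select( 'core/editor' ).getEditedPostAttribute( 'title' );
	return el( Title, { title: title } );
}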
But this approach has some downsides. The value of the title prop can change over time and we’d like to refresh our component once that happens. This means we need to use wp.data.subscribe and rerender each time the title changes.
Fortunately, to avoid having to write this logic ourselves, the data module provides what the React community refers to as a higher-order component: a function that wraps another component and feeds it with props.
Here’s how you provide the title as a prop using the data module:
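A sketch using withSelect to wrap the same Title component:

var EditorPostTitle = wp.data.withSelect( function( select ) {
	return {
		title: select( 'core/editor' ).getEditedPostAttribute( 'title' ),
	};
} )( Title );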
Now when we render the EditorPostTitle component, it will be automatically refreshed each time the title value changes.
el( EditorPostTitle );
Similarly to how the withSelect Higher-order component provides props to components using selectors, the data module also provides a withDispatch Higher-order component to feed components with actions.
Let’s write an input that can be used to change the value of Gutenberg’s post title.
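Here’s a sketch of how that could look, assuming the compose utility is available via wp.compose and reusing the el shorthand from above:

// A simple controlled input receiving the title and an onChangeTitle callback.
function TitleInput( props ) {
	return el( 'input', {
		type: 'text',
		value: props.title,
		onChange: function( event ) {
			props.onChangeTitle( event.target.value );
		},
	} );
}

var EditorTitleInput = wp.compose.compose(
	wp.data.withSelect( function( select ) {
		return {
			title: select( 'core/editor' ).getEditedPostAttribute( 'title' ),
		};
	} ),
	wp.data.withDispatch( function( dispatch ) {
		return {
			onChangeTitle: function( title ) {
				dispatch( 'core/editor' ).editPost( { title: title } );
			},
		};
	} )
)( TitleInput );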
Notice in this example how we’re composing both withSelect, to retrieve the current title, and withDispatch, to provide an onChangeTitle callback that the input component can call to perform the update.
Protip: Notice the compose function being used in the example; it’s a utility function that allows combining several higher-order components together.
Until now, we were mostly dealing with synchronous selectors and actions but WordPress Data offers a way to handle asynchronous data as well.
To do so, we can attach a side effect to each selector: a function that gets executed the first time a selector is called with a given set of arguments. This function can be responsible for performing an async API request and updating the state once the request succeeds. These side-effect functions are called resolvers.
For instance, we might be interested in fetching the list of posts to display them in a PostList component.
var el = wp.element.createElement;

// This is a UI component showing a list of post titles given a posts prop.
function PostList( props ) {
	// Default to an empty list while the posts are being fetched.
	var posts = props.posts || [];
	return posts.map( function( post ) {
		return el( 'div', { key: post.id }, post.title.rendered );
	} );
}

var RecentPostList = wp.data.withSelect( function( select ) {
	return {
		posts: select( 'core' ).getEntityRecords( 'postType', 'post' ),
	};
} )( PostList );
Notice that in this example, the APIs being used are the same ones we were using before, but behind the scenes, a REST API request is performed the first time the getEntityRecords selector is called, in order to fetch the post list.
Here’s the exact flow:
The first time the getEntityRecords selector is called, the post list is empty. So the PostList component is rendered with an empty list of posts.
But behind the scenes, the core data module also calls the resolver with the same name, getEntityRecords, to perform the API request for you.
When the API Request resolves, the state is updated with the received posts. This will automatically retrigger a rerender, thanks to the way withSelect works.
The PostList component gets re-rendered with the updated list of posts.
Protip: If you’re interested in implementing this kind of side effects in your own plugins, make sure to take a look at the resolvers in the Data module docs.
WordPress Headless / Agnostic?
The Data Module is a generic JavaScript package to handle data: you can use it in a WordPress client, but also in any application unrelated to WordPress. Instead of using the wp.data global, you can just fetch the package from npm:
npm install @wordpress/data --save
Going further
The Data Module is still being enhanced. If you’re interested in learning more about it or any other module, make sure to join the #core-js channel of the WordPress Core Slack and the weekly meetings happening each Tuesday at 13:00 UTC.