With the release of ChatGPT, everyone in the tech industry – from CEOs to managers and engineers – is wondering how these new tools and technologies can and will disrupt their lives. Will it make their jobs obsolete, or will it increase productivity? Should we fear it, or embrace and benefit from it?
Developers often think about how AI can help them do their jobs more efficiently. Tools like Copilot, Ghostwriter, and ChatGPT are being used more and more as coding assistants. One might ask how much further they can go and which developer duties they can absorb. As a result, some could say that developers’ added value will decrease, but if you’re more optimistic, you might think that it will free developers from boring and repetitive tasks, allowing them to focus on more important and difficult work. The reality in the mid term is probably a bit of both: we’ll need fewer developers, and the ones who remain will be using AI heavily to get the job done.
Content creators are also heavily impacted, and several actors in the field are racing to see how much AI they can integrate into their software to speed up and improve the content creation process. Notion with its AI assistant, Shopify with Shopify Magic, and Automattic with Jetpack AI are all experimenting, but we’re still very early in the game, and no one really knows what will or won’t work.
That said, when a disruptive technology is developed, we often need to think outside the box to really understand its impact. It’s not enough for a developer to try to apply the technology to their day-to-day workflows and habits. It’s not enough for CMSs to ask themselves how to best use AI to improve the creative process. One has to ask about the real purpose of their work in human life and consider its broader social, ethical, and cultural implications.
My job today is to build software used for content creation and building websites. But why do we build websites in the first place? It’s not a goal in itself. The real goal is to provide information to users in the most efficient and organized way. A user goes to a website to accomplish tasks such as booking a reservation, finding information about a product, or learning about a service. So, the question I have been asking myself is whether AI can allow us to help users with these tasks in the most effective and easy manner for the user.
Are websites still the right tool for that purpose?
Why would I open a browser if I can just talk to a bot and get an instant customized reply? Yes, people have been talking about bot-first UIs for some time now, but bots have always felt inefficient. For example, I can’t ask Siri about the price of a facelift surgery in my hometown plastic surgery clinic and get an instant reply. Siri is both limited technically and only understands a very limited subset of natural language. The clinic had to build a website for me to get all the information I need, and Siri will just point me to that website. But ChatGPT doesn’t have these boundaries. What if there was a way for the clinic to feed all this information into a GPT-powered bot that anyone can call and get all the up-to-date information? What if that way of feeding an AI was an easy-to-access standard that any small business could use? What’s the place of websites in such a world?
Another question worth asking is: who provides that standard, and where will the data be hosted for the AI to consume? Given the costs to train and run large-scale AI, the most logical answer is that BigTech (Google, Microsoft, Facebook) has a certain advantage there. These companies also already have access to a huge data set of all the businesses and their information. How much will people trust these companies with all their data?
In conclusion, this post asks more questions than it answers; the truth is that no one really knows the actual answers at this point. All we know is that the disruption rate is increasing exponentially over time. Change is coming to us sooner than one might expect. A world without websites, dominated by big AI models from BigTech, sounds plausible; it’s not very compelling, but it also seems inevitable. My conviction is that websites need to evolve. There might be a place for them in that world, but it becomes more of a niche, driven by nostalgia and self-esteem. Websites will enter the museums of the future.
If you’ve been following my blog or WordPress development for some time now, you’d know by now that the Gutenberg project contains a number of hidden gems that you, as a JavaScript developer or a WordPress developer, can benefit from in your own standalone applications or WordPress plugins.
One of these hidden gems is the keyboard shortcuts package, a utility package that allows you to do all sorts of things related to keyboard shortcuts: adding them, removing them, updating them, and more.
As a standalone package
Like any package in the Gutenberg repository, it’s distributed as a standalone npm package that you can consume and use in any React application. Here’s a basic example (full example available on Stackblitz):
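A minimal sketch of what such an example might look like (the myapp/save shortcut name, its category, and the save behavior are placeholders I made up for illustration):

import { useEffect } from 'react';
import { useDispatch } from '@wordpress/data';
import {
	ShortcutProvider,
	useShortcut,
	store as keyboardShortcutsStore,
} from '@wordpress/keyboard-shortcuts';

function Editor() {
	const { registerShortcut } = useDispatch( keyboardShortcutsStore );

	// Declare the shortcut when the component mounts.
	useEffect( () => {
		registerShortcut( {
			name: 'myapp/save',
			category: 'global',
			description: 'Save your changes.',
			keyCombination: { modifier: 'primary', character: 's' },
		} );
	}, [ registerShortcut ] );

	// Attach the behavior to the registered shortcut.
	useShortcut( 'myapp/save', ( event ) => {
		event.preventDefault();
		window.alert( 'Saving…' );
	} );

	return <textarea />;
}

export default function App() {
	return (
		<ShortcutProvider>
			<Editor />
		</ShortcutProvider>
	);
}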
To use the package, first wrap your React application in a ShortcutProvider. (The shortcuts will only work if the active element is inside the shortcut provider in the DOM tree.)
The next step is to register your shortcuts using the registerShortcut action. This function call declares the existence of your shortcut in your shortcuts store, providing the keyboard combination used to trigger the shortcut behavior and some additional metadata, like a description and a category.
It’s a good practice to register all your shortcuts when initially rendering your application component.
That said, for the shortcuts to actually perform any action, we need to define their callbacks, and we do so using the useShortcut React hook, the first argument being the shortcut name and the second the function called when the key combination is pressed.
Note: The reason registering shortcuts and using shortcuts are separate steps is that some shortcuts are contextual to specific actions, but the application always needs to know about the existence of a shortcut during its lifecycle. For instance, all shortcuts can be rendered in a help panel even when not all of them are active. For example, a “copy” shortcut in an editor is only active if there’s a selection.
The keyboard shortcuts package allows using single character shortcuts or shortcuts with one or two modifiers. These are all examples of valid key combinations:
{ character: 'a' } is equivalent to pressing a.
{ character: 'del' } is equivalent to pressing the delete button.
{ character: 'a', modifier: 'primary' } is equivalent to pressing command + a on Mac.
{ character: 'a', modifier: 'primaryShift' } is equivalent to pressing command + shift + a on Mac.
{ character: 'a', modifier: 'alt' } is equivalent to pressing option + a on Mac.
As a WordPress API
In WordPress, the keyboard shortcuts package is used to register all the block editor shortcuts, but you can also use it to register your own custom shortcuts in your block or any WordPress plugin or page using the same API. It is available in the wp-keyboard-shortcuts WordPress script. If you plan on using it in your plugin’s scripts, make sure to add wp-keyboard-shortcuts to the dependencies of your custom scripts.
The wp.keyboardShortcuts global variable will then be made available for you with all the APIs exposed by the package: useShortcut, ShortcutProvider…
Super Powers
Editing keyboard shortcuts
An important aspect of implementing keyboard shortcuts in any application is defining the right key combination for the right shortcut. That said, it is surprisingly hard to come up with combinations that work for everyone, in all browsers and all operating systems. For that reason, the keyboard shortcuts package makes it possible to update registered shortcuts and change their key combinations.
import { useSelect, useDispatch } from '@wordpress/data';
import { store } from '@wordpress/keyboard-shortcuts';

function ToggleIncrementShortcut() {
	const { getShortcutKeyCombination } = useSelect( store );
	const { registerShortcut } = useDispatch( store );
	const toggleShortcut = () => {
		const currentCharacter = getShortcutKeyCombination( 'mycounter/increment' )?.character;
		const nextCharacter = currentCharacter === 'a' ? 'i' : 'a';
		registerShortcut( {
			name: 'mycounter/increment',
			description: 'Increment the counter',
			keyCombination: {
				character: nextCharacter,
			},
		} );
	};

	return (
		<button onClick={ toggleShortcut }>
			Toggle between "i" and "a"
		</button>
	);
}
So as we can see in the example above, registering the shortcut again overrides its current definition, allowing us to update any of its properties, including the key combination.
💡 In the future, the block editor might provide this automatically but until then, there’s an opportunity here for a WordPress plugin that provides a UI to edit the existing shortcuts.
Building a performant editor is a very difficult task; it requires constant attention and the monitoring of some key metrics. In the context of the WordPress block editor (aka Gutenberg), we constantly track the following key metrics:
Loading time: The time it takes from the moment the user clicks the “new/edit post” link until the editor loads the post to be edited and becomes responsive to user input.
Average Typing time: The time it takes for the browser to respond to characters being typed. This is one of the most important metrics for an editor; this measure should ideally be very small, so the user doesn’t notice any delay or lag.
We also track some secondary metrics that are specific to block editors including:
Block Selection time: In a block editor, everything is a block and the user constantly jumps from one block to another. With this metric, we track the time it takes for the browser to respond when selecting a new block.
Global Inserter Opening time: This tracks the time for the browser to respond when opening the global inserter (or the block library), showing the available blocks.
By constantly keeping track of these numbers while iterating on features and bug fixes for the editor, we managed to improve the performance of the editor drastically over time. In a previous post, I shared some of the techniques we used to make these leaps forward.
That said, one of the important aspects of WordPress and its block editor is their extensibility. The WordPress plugins repository contains thousands of plugins to supercharge your WordPress installation and editor. A typical WordPress install has at least a dozen active plugins. And of course, plugins have costs: the editor needs to be performant intrinsically but also stay performant as you extend it.
Unfortunately, depending on the plugins used, this is not always the case.
Popular WordPress Plugins
The first report compares the metrics of 8 of the most popular WordPress Plugins on the repository in addition to Gutenberg itself (Gutenberg is always enabled in all the tests). Here’s the list of the tested plugins:
Gutenberg v11.3.0 RC 1
Akismet v4.1.10
Contact Form 7 v5.4.2
Elementor v3.3.1
Jetpack v10.0
Really Simple SSL v5.0.8
WooCommerce v5.5.2
WPForms Lite v1.6.8.1
Yoast v16.9
Results
| Plugins | Loading time | Average Typing time |
| --- | --- | --- |
| Gutenberg | 4318ms | 45.13ms |
| Akismet | -0.57% | +4.25% |
| Contact Form 7 | +4.15% | +3.92% |
| Elementor | +10.31% | +3.51% |
| Jetpack | +19.48% | -22.42% |
| Really Simple SSL | -0.65% | +1.84% |
| WooCommerce | +16.05% | +6.51% |
| WPForms Lite | +5.52% | +20.05% |
| Yoast | +25.29% | +3.17% |
Observations
Here are some of my own takeaways from the numbers above.
Most of the popular plugins have no impact or a reasonable impact on the loading and typing times of the editor.
WooCommerce, Yoast and Jetpack have a noticeable impact on the loading time.
Surprisingly, the Typing metric is faster when using Jetpack.
Analysis
I think the results above are good news for WordPress. Most popular plugins don’t have a big impact on the editor’s performance.
The plugins that interact the most with the editor (adding blocks, adding meta boxes or sidebars, or extending the editor via slots) are the ones impacting the loading time a bit. They might be loading extra JavaScript and stylesheets in the editor.
Note: I tried including the WordFence Security plugin in my benchmark; unfortunately, by default that plugin had a very big impact on the loading time, which triggered timeout errors when running the performance tests for it. I was not able to gather the numbers for that particular plugin.
Block Editor Plugins
The second report compares the editor metrics for 10 of the most popular plugins that target the block editor specifically, whether they are block library plugins or plugins enhancing the editor with tools and customization options. The list of the compared plugins is the following:
Gutenberg v11.1.0 only
CoBlocks v2.16.0
Editor Plus by Extendify v2.8.2
EditorsKit v1.31.5
Getwid v1.7.4
Gutenberg Blocks and Template Library by Otter v1.6.9
Kadence Blocks v2.1.7
Redux v4.2.14
Stackable v2.17.5
Starter Templates v2.6.21
Ultimate Addons for Gutenberg v1.24.0
Results
| Plugins | Loading Time | Typing Time | Block Selection Time | Inserter Opening Time |
| --- | --- | --- | --- | --- |
| Gutenberg | 4237ms | 53.85ms | 58.23ms | 59.96ms |
| CoBlocks | +255.11% | -12.44% | +65.88% | +14.29% |
| Editor Plus | +1064.99% | -21.15% | +171.70% | +404.59% |
| EditorsKit | +66.21% | +77.20% | +17.29% | +52.94% |
| Getwid | +3.75% | +2.21% | +9.07% | +1.08% |
| Otter | +36.30% | +22.28% | +12.83% | -1.13% |
| Kadence Blocks | +4.58% | +4.46% | +14.03% | -2.75% |
| Redux | +73.93% | +104.64% | +16.33% | +14.76% |
| Stackable | +19.58% | +10.06% | +6.80% | +20.60% |
| Starter Templates | +6.84% | +9.68% | +9.87% | +5.90% |
| Ultimate Addons | +13.73% | -20.43% | +9.24% | +15.23% |
Observations
Here are some of my own takeaways from the numbers above:
No surprise that these plugins have a more visible impact on the numbers since they specifically target and extend the block editor.
The loading time is not consistently impacted by the block library plugins, some are doing better than others.
Editor Plus and EditorsKit impact all editor metrics significantly.
Analysis
Editor Plus and EditorsKit are plugins that add customization capabilities to the block editor in very different ways: they impact core blocks, add blocks, and add tools to interact with the editor. Based on my experience, adding these kinds of built-in features to the editor can quickly have a broad impact on performance because they can affect all rendered blocks. These are very valuable plugins, but I do think they require more care than typical plugins when it comes to performance. Tracking editor metrics for these kinds of plugins is key.
Lazy-loading editor assets (JS/CSS) is something we ultimately want to explore in the editor to keep the bundle size and loading time contained, but the loading time numbers here suggest that it’s not a fundamental issue in the block editor itself, since some block libraries do add a number of blocks (and assets) without a meaningful or big impact on the loading time. Plugins like CoBlocks, Redux, or Editor Plus might be up for some quick wins there.
In a previous version of this benchmark, I noticed that most block libraries had a significant impact on the inserter opening metric. This led to some improvements to Gutenberg Core itself: inserter items are now lazy-rendered, meaning that adding more and more items doesn’t impact this metric as much, as the numbers above confirm.
Note: A frontend metric would be a great addition to the key metrics to monitor for block library plugins; it’s often even more important than the editor-related metrics.
Methodology of the test
The tests were run sequentially on the same idle computer using @wordpress/env and the Gutenberg e2e performance job.
For each plugin, I didn’t configure it or enable/disable features; I just went with the default settings, considering that most users are going to use the defaults and that plugins should be performant by default.
The editor is loaded and used with a particularly sizeable post (~36,000 words, ~1,000 blocks).
Of course, this is not a scientific method but based on my previous experience with these metrics, the numbers are meaningful with a margin of error of 5% to 10% to account for the randomness of CPU usage/timing of the test.
One of my main motivations for this post was to highlight these issues more and encourage plugin authors to monitor the performance impact of their code. Performance should be considered a first-class feature. Of course the key metrics for each plugin might differ but a good first step is to run the Gutenberg metrics with or without your plugins and compare the results.
Here’s how you can do it on your own:
First, clone the Gutenberg repository and build it:
git clone git@github.com:WordPress/gutenberg.git
cd gutenberg
npm install
npm run build
The next step is to run the WordPress + Gutenberg environment. We can just use Gutenberg’s built-in environment like so (Docker Desktop is a requirement here):
npm run wp-env start
You should be able to access the testing environment on http://localhost:8889
Install your plugin on the environment above directly from WPAdmin and activate it.
You’re now ready to run the tests like so:
npm run test-performance packages/e2e-tests/specs/performance/post-editor.test.js
And that should be it. You can run the tests as often as you wish, try different variations of your plugin, disable it, compare with other plugins…
Some Hints
While working on performance improvements on the editor, we noticed some trends that can help you find the bottlenecks for your own plugins:
For the loading metric, consider checking the initial rendering of your components/UI; sometimes deferring the initial rendering of non-essential UI helps.
The size of the loaded assets can also have an impact on the loading metric.
For the typing metric, consider checking your selectors (wp.data.useSelect, wp.data.withSelect, wp.data.subscribe calls), as shown in the sketch after this list. My previous post goes into more detail here.
Chrome’s performance monitoring tools are a great way to detect and debug performance regressions. One approach I personally use often is to record a trace for a given interaction (like typing a single character, opening the inserter, loading the page, or any interaction you want to debug) and compare the resulting trace with and without your plugin.
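To illustrate the selectors hint above, here is a hedged sketch (the store name and selectors are hypothetical) of a pattern to watch out for:

import { useSelect } from '@wordpress/data';

function PublishedPostsCount() {
	// Problematic: the filtering runs on every store change (every keystroke),
	// even when the list of posts hasn't changed at all.
	const publishedPosts = useSelect( ( select ) =>
		select( 'my-plugin/store' )
			.getPosts()
			.filter( ( post ) => post.status === 'publish' )
	);

	// Better: move the computation into a memoized store selector (reselect,
	// rememo…), e.g. select( 'my-plugin/store' ).getPublishedPosts(), so that
	// repeated calls are cheap and return a stable reference.
	return <span>{ publishedPosts.length }</span>;
}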
Notes
An initial version of this benchmark resulted in very different numbers from the ones we have today. I reached out to some of the plugin authors and shared the numbers with them. I’d like to thank all of them, as they were all receptive. Some gains are already reflected in the new versions tested above, and I know that the authors of the plugins above are working on more improvements in the upcoming weeks.
If you run the same benchmarks locally, you might get different numbers, and that’s totally fine. The machine running the tests has an impact there; for instance, your Docker instance may be faster but your browser slower, resulting in different numbers and rates. That said, comparing against your own Gutenberg numbers should be relatively stable compared to the numbers in this post.
Conclusions
I would like to finish by encouraging folks to care about performance daily in their development workflows. For Gutenberg Core itself, this post highlighted some good additions to include in our performance pipeline and some areas worth debugging.
Let’s make WordPress and its plugins blazing (Everyone is using this word lately, I finally found a place for it 😀) fast.
Building a website these days is all about finding the right balance between a coherent and consistent design across the website and customization capabilities that allow specific content to shine.
Gone is the era where everything was customized manually (remember Dreamweaver and FrontPage?). CSS came to be, and through different iterations on top of it, guidelines and frameworks exist today to ensure this consistency. Some developers still use Bootstrap, others use Tailwind, and many build their own design systems. Design system: that’s a big word and a big trend these days, a promise of a coherent set of guidelines and components that ensures developers, designers, and content creators are aligned and share the same expectations.
How does this translate in a CMS? How does this translate in WordPress, which runs more than 40% of the world’s websites? The answer has always been themes. While themes mean different things to different folks and have been used in different ways by different people, in their essence, they are what provides the consistency in the design of the website. They also define what content creators can or cannot customize since the degree of freedom granted to content creators might differ from one website to another depending on the context.
While WordPress continues to push its block editor and starts introducing new systems like Full Site Editing, themes live on and will remain the main entry point to define the design system and the shared guidelines for content creators.
How does this translate in a block world?
Initially, the block editor just embraced the classic WordPress APIs and approach. This meant that in order for a theme to define shared guidelines and settings, it had to rely on a set of available theme support flags. And for the design language and styles, it had to use CSS to override the default styles shipped with blocks.
Quickly the limitations of this approach appeared:
The block editor has a lot more customization capabilities than the classic editor by default; theme support flags do not scale properly and do not provide the flexibility required to control these capabilities properly (per block, per context…).
Blocks come with built-in CSS, and overriding the CSS to match the theme’s flavor is no easy task given the number of variations the blocks can have.
This is where Global Styles and Global Settings come in (we also talk about theme.json config to refer to these two APIs). What are these new concepts and how do they affect block and theme authors?
Theme authors
So in order to allow theme authors to provide these shared settings, WordPress and the Gutenberg project introduced the theme.json file. It’s a file that lives at the root of the theme folder and defines two important keys: settings and styles.
Settings
The settings are a list of global or contextual configuration values that define how the editor and blocks behave: which controls are disabled by default and hidden from the UI, which ones are visible. It also defines the color palette, the default typography presets (font sizes, font families…) available for editors to pick from.
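A rough sketch of such a configuration, assuming the current theme.json schema (the exact version number may differ), disabling drop caps globally:

{
	"version": 1,
	"settings": {
		"typography": {
			"dropCap": false
		}
	}
}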
In the example above, the theme is forbidding the use of dropCap in the UI for all blocks making use of that setting.
Settings can also be more granular and contextual to specific blocks, to support use cases like disabling colors everywhere but enabling them only for a specific block. For such use cases, we just use the block name to define block-specific settings.
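For instance, a sketch (assuming core/button as the target block and the current schema) disabling custom colors globally while re-enabling them for that single block might look like:

{
	"version": 1,
	"settings": {
		"color": {
			"custom": false
		},
		"blocks": {
			"core/button": {
				"color": {
					"custom": true
				}
			}
		}
	}
}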
The styles section on the other hand is about defining the design language of the theme. It allows theme authors to define the default color, font size, line height, font family, link colors, heading sizes… At render time, it is translated into a CSS style sheet that is injected into the frontend and the editor.
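A hedged sketch of such a configuration (the color value is just a placeholder):

{
	"version": 1,
	"styles": {
		"elements": {
			"link": {
				"color": {
					"text": "#0050c8"
				}
			}
		}
	}
}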
In this example, I’m defining the color of all link (a) elements across blocks.
In the same way, I can override these styles for a specific block. In the following example, I set the default background for buttons as blue with a white text color.
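A sketch of what that might look like (the exact color values are placeholders):

{
	"version": 1,
	"styles": {
		"blocks": {
			"core/button": {
				"color": {
					"background": "#0050c8",
					"text": "#ffffff"
				}
			}
		}
	}
}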
It’s also important to note that by using Global Styles and theme.json, editor styles generated from the theme config will be automatically loaded into the editor. Also, the presence of a theme.json file in your theme directory is an indicator for the block editor to embrace a simpler markup for some blocks, like the group block.
Block authors
The shared settings and styles above work across Core blocks but third-party block authors can also support these in their blocks.
Settings
In order for a block to embrace Global Settings in its editor UI, a dedicated React hook called useSetting can be used:
// Somewhere in your block's edit function.
// useSetting is imported from the @wordpress/block-editor package.

// Retrieve the value of the dropCap setting.
const isEnabled = useSetting( 'typography.dropCap' );
if ( ! isEnabled ) {
	return null;
}
return <ToggleControl ... />;
In this example, we’re retrieving the value of the typography.dropCap setting and if the dropCap is enabled, we show the corresponding UI to allow content creators to use a drop cap.
That’s it: all settings can be accessed in the exact same way. For a complete list of settings available to the block authors, take a look at this reference.
Styles
Global Styles on the other hand should work mostly by default in all blocks thanks to the CSS cascade. Global Styles work by generating and injecting CSS based on theme configuration in theme.json file or saved user configuration (for FSE themes only).
That said, some blocks opt out of the generated class name, which means the generated global styles won’t apply in that case. For these kinds of blocks, a selector must be provided in the block.json config.
That selector is then used instead of the generated class names to generate the styles.
Note: It’s important to note that a block can actually be styled using global styles even if it doesn’t provide UI for the user to edit these styles. In most situations though, support for these customizations in the UI can be added quickly to any block, static or dynamic, thanks to the Block Supports API. Also, when using Block Supports, blocks automatically adhere to the Global Settings discussed above as well.
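As an illustration, a hypothetical block.json declaring color supports (the block name and fields are placeholders) might look like:

{
	"apiVersion": 2,
	"name": "myplugin/notice",
	"title": "Notice",
	"category": "text",
	"supports": {
		"color": {
			"background": true,
			"text": true
		}
	}
}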
When can I start using these?
The APIs mentioned are available when the Gutenberg plugin is active and are targeted to land in Core as stable in the upcoming WordPress 5.8 release, slated for July 20. If you’re a theme or block author, it’s time to start familiarizing yourself with these APIs.
Writing software is easy, sustaining it for years is harder, and doing it for open-source software is a challenge. Here’s a story about my journey to help build the WordPress block editor from a maintainer’s perspective, a perspective probably invisible to most developers and contributors (unless you’re an open-source project maintainer).
The fun part
As a long time developer, my main motivation is to ship features, write software, and put it in the hands of the users to help them achieve their goals. So when I heard about WordPress thinking about building a new editor from scratch, I immediately understood how impactful that project could be, given the scale of WordPress and the central place the editor occupies in any CMS. Thus, I volunteered very early on to join these efforts and was delighted to learn that I was part of a small team of people who were being sponsored to lend a hand there.
Starting a project from scratch is an opportunity most developers enjoy. The early days are where the fun happens: you get to define the structure, the guidelines of the project, the code style. You get to choose the technologies to be used and participate in the early prototypes. You define the base APIs and you get to engage in early design discussions. And more importantly, for a developer motivated by impact, you get to ship software at a very high pace. It is very rewarding.
And this is exactly what happened with the block editor project. Early on, we were a group of a dozen folks showing up at the weekly #core-editor meetings; we worked on several prototypes, created the base UI components, defined the desired base Block API and block-based format, and eventually reached an important milestone where the block editor could be shared with users and third-party developers as a beta version.
The growing part
The initial release of an open-source project is one of the most rewarding moments of the timeline. Early adopters get to enjoy your work and provide feedback you can act on. People start asking for features via issues and, with your existing knowledge of the project, your voice matters. Some of them can also provide their own contributions and open pull requests. Your feedback is important there since you have worked on the original APIs and architecture of the project. Eventually, you become an expert, a public figure; you get respected (or hated) for your work, but your input becomes necessary.
For the block editor project, there were a few of us in this situation and we were able to reasonably balance receiving feedback, acting on it, and making substantial improvements and iterations to the overall project.
At this point in time, you also start to be careful about public APIs. As with any WordPress feature, the block editor is meant to be extensible at its core. Third-party developers can write custom blocks and extend the editor in a number of ways. As we started getting more users for our plugin (a beta product), and even though we were in a beta period in which API changes are allowed, we had to start being very explicit there: identifying and documenting API changes, and ideally providing upgrade paths and time for third-party developers to adapt their code.
The serious part
Next up in the lifecycle of the software is to actually ship a stable version. It is very hard to know exactly where to draw the line and stop the iterations on the beta product and make the jump — but eventually the time comes, and with it comes the age of maturity. Your software starts to be used by a large number of users and extended by a number of developers (depending on the scale of your market/community).
In terms of software features, this is generally a small step; the software doesn’t change much for its stable release. But in terms of flows and dynamics of the project, this is a huge turning point.
The first challenge you’ll face here is dealing with backward compatibility for all the APIs you have been building so far. The backward compatibility strategy differs from one project to another, but in general this means that you’re committed to keeping these APIs working properly for a long time. You can’t risk breaking the user’s or developer’s trust.
For an npm library (or any other developer dependency), you have the luxury of relying on semantic versioning. Meaning, making breaking changes to the API is allowed if you make sure to update the version of your software accordingly. This communicates your intent to your users. This is made possible because updating a dependency is an explicit action that a developer takes in their development environment, and thus the developer can make sure their extension/product still works with the new version before pushing an update to the production environment. Nonetheless, library authors try to avoid breaking changes as much as they can, or at least reduce their frequency. The React team, for instance, only releases an incompatible version every couple of years or so, and when they do, they make sure to provide a simple upgrade path for their users.
For WordPress, the story is very different. While WordPress does make some small breaking changes from time to time, its goal is to do none. This is understandable because updating WordPress is an operation users perform on their production websites. These updates can also be done automatically without any manual intervention. And with the scale of WordPress (38% of the websites at the time of writing), it can’t afford to break millions of websites because of a change incompatible with third-party plugins.
So when the block editor landed as stable in WordPress 5.0, we knew we were making strong commitments towards supporting its APIs for a long time and this had a non-negligible impact on the development pace. It is very hard to quantify exactly but, for every pull request, a very careful review is required to check the impact on existing sites and APIs. This can also mean intentionally delaying features and enhancements until there’s a better moment/way to introduce the API changes with the minimum impact on existing sites and plugins. Spreading changes across different releases is a common strategy to help communicate changes and give time to third-party developers to adapt their code before actually making the required changes. As an example, it is no surprise that WordPress still uses jQuery 1, but it’s important to understand that a migration process throughout several major releases is underway.
While the impact of the backwards compatibility strategy on the development process and pace was something we anticipated properly, what came as a surprise to me was another consequence of the stable release: we put the software in the hands of millions of users while the size of the group that was referred to as the “experts” of the project remained unchanged. This led to a very high influx of notifications, direct messages, mentions on issues, pull request reviews, and requests to weigh in on technical discussions and feature proposals. We became the bottleneck.
I’ve been reading a little bit about the subject (I strongly recommend these two posts: The city guide to open source by Devon Zuegel and (Open) source of anxiety by Henry Zhu) and this seems to be a common problem in successful open source projects: the people most equipped to move the project forward by undertaking big changes and improvements are the least likely to have time to actually make these changes.
What’s next
This is the current challenge we’re faced with: helping the community as much as we can while moving the big upcoming projects forward (full site editing, the widgets screen, the navigation block, global styles — just to name a few).
WordPress has an amazing community. More contributors are embracing the vision and gaining expertise. I’m confident that, with the participation of all, we’ll make it happen and climb another step in our journey to democratize publishing.
tl;dr: With BlockBook, you can build, test and showcase your static WordPress (aka Gutenberg) blocks in isolation. It can also be used to test the block styles of your themes. In short, it’s going to change how you develop and style blocks. If you’re a block developer, BlockBook is a must. You can see a working BlockBook here.
Since the initial release of the block editor in WordPress 5.0, the community has adapted rapidly. Hundreds of block plugins have been released.
Also, a dozen scaffolding tools to quickly generate or build your blocks are popping up. This includes the official tool @wordpress/create-block led by my friend Grzegorz Ziółkowski.
Some challenges remain unsolved, though:
1- Creating blocks in the context of WordPress is not straightforward. In addition to requiring a running WordPress environment set up with your plugin’s code, which can be tedious to get right, every time you make a change to your block, you need to refresh your editor (potentially facing block invalidation issues), recreate blocks, apply changes, and ensure the result is correct. This is hard and takes a lot of time.
2- Another frustrating aspect is theme testing. Blocks are supposed to work with all themes, and themes can style blocks to make them fit the theme identity more closely. For block authors and theme authors alike, this is challenging. There’s a need to install multiple themes, switch between themes, and — if you’re a block developer — ensure your styles are agnostic enough. And, if you’re a theme author, it’s hard to navigate and try out different blocks as you make changes to your stylesheets.
3- For users, it is very hard to discover blocks properly, to try them out, and to make informed decisions about which plugins to install or not. The Block Directory is a good step in this direction, but navigating blocks is still an inconsistent experience across plugins.
Meet BlockBook
How can we solve these issues and make building and sharing blocks easier for developers and users?
If we think about it a little bit, blocks are reusable units that can live on their own outside of any context, they can be edited visually, and they produce markup. They have in fact a lot in common with React Components. They are super-powered React Components.
Well, we already know that, for UI components in React, a tool called Storybook exists which allows developers to solve the kind of issues mentioned above.
That’s where the idea came from: what if we took Storybook’s principles as inspiration and applied them to build an environment where we are able to build, test, document, and share blocks in isolation from any context, in a consistent and very performant way?
BlockBook is available as an npm package, so adding it to your own projects is straightforward. (Read the documentation for more details). Once the setup is done, you can run it locally using a simple command: npm run blockbook:start
And you can host it as a static website on GitHub pages (or any host).
BlockBook and WordPress
There’s a lot we can do with BlockBook in WordPress and, while this is built as a personal project for the moment, my intent is to propose it as an official @wordpress package and use the Core BlockBook as official documentation for the Core blocks.
Also, you can imagine that, with the Block Directory project and its guidelines, it should be possible to automatically build and host a BlockBook per block plugin (or across plugins) on WP.org (and potentially even themes).
These are just some of the ideas of where we could take this next.
Limitations
BlockBook works well with projects using wp-scripts as a build setup. For alternative setups, it may or may not work at the moment. Please feel free to try it, and share feedback.
Contribute
And as you’d expect, it’s open source, GPL-licensed, and you can help shape the project in the GitHub repository.
As mentioned in a previous blog post, when I started programming, testing software was a rare practice. And like a lot of folks (even if they might not admit it), when I started reading about automated testing… it felt like a waste of time for me.
But as time passed and while working on different kinds of projects, I started learning about the importance of testing to ensure the quality of software over time.
During this journey, my conception of automated testing evolved a lot. When I was younger, I had a tendency to read tech articles like bibles, and if a famous developer suggested that unit tests were the panacea, I would think it must hold true for any project and any code, and that if I didn’t reach 100% coverage, I was doing it wrong.
As you might expect, my opinion changed a lot over time, and I have fewer certainties these days. What I really learned is that any policy, for any given project, is dependent on the priorities and the context of that project and that there’s no such thing as best practice or anti-pattern that can hold true across contexts. This applies to tests as much as any other development-related practice.
So when it comes to testing, I developed some intuitions that I wanted to share in addition to some of the reasoning behind them.
Don’t take my word for it though and build your own intuitions depending on your specific context.
In general, when starting a new project that still needs time to prove its value, with no guarantee that it will last, I rarely write tests. My main priority is to make sure the experiment is valid and the proof of concept is worth it before investing in tests.
When projects mature and several developers have come and gone, it becomes very important to invest in a strong testing policy. What I consider a good testing policy these days is a mixture of a number of unit tests and end-to-end tests for the critical paths of your project.
I consider testing small components, functions, and straightforward code as mostly useless. The tests are often a duplication of the code logic and need to be updated every time you make a change to the production code. That said, it’s not always easy to identify these kinds of tests initially, but we shouldn’t be afraid of removing tests if they prove to be useless and problematic to maintain. Removing tests shouldn’t be taboo.
I don’t abuse unit tests; they shine for complex code with a well-defined API: functions that have clear inputs and outputs and where the path from inputs to outputs is not straightforward but requires advanced logic.
I prefer end-2-end tests in most cases as they test the behavior of the software. End-2-end tests mean different things for different kinds of projects, though. For a website, a mobile application, or any user-facing software, the end-2-end tests are tests simulating user interactions on headless browsers or device simulators. For packages and libraries, these are referred to as integration tests. They often resemble unit tests, except that they essentially exercise the external APIs.
End-2-end tests for user-facing applications are very important to avoid regressions but while the tooling has made substantial progress in the last couple years with things like Docker, Puppeteer, headless browsers… these remain fragile and they generally take a long time to run, so it’s important to be smart about what you’re testing, focus on the critical paths without forgetting about the maintainability cost of these tests.
On several occasions, we can be tempted to rely on tests based on generated fixtures and snapshots to quickly increase the coverage. Fixture-based tests are tests that perform a complex operation multiple times by slightly changing the inputs and saving a snapshot of the output. I’ve seen them being used for navigating into pages and capturing the HTML of specific areas of the page, or parsing hundreds of documents and saving the result. I would personally avoid this type of test as much as possible. While they do increase the coverage very quickly, they fail to reach the main goal of the testing policy: ensuring software stability. The main reason for this is the human aspect: the expected results from these kinds of tests are often unclear. When an error happens, developers get confused about whether the changes to the fixtures are expected due to the code change they performed, or whether it’s a real failure. Over time, they develop the habit of regenerating the fixtures when the tests fail without giving it too much thought. I don’t blame the developers for that; I see tests without clear expectations as the main issue here. Again, in these cases, removing tests shouldn’t be seen as a bad practice.
These are some practices and intuitions I’ve developed over time, and I’m certain that I’ll continue to reconsider some of them and build new ones. If I have a single piece of advice to give, it would be to always consider the project’s priorities and context when defining policies, best practices, and anti-patterns. These change from one project to another and evolve over time within the same project.
For WordCamp Europe Online Contributor Day, I’ve prepared a post to onboard new contributors. I’ve used notion.so for this. It’s great software, but a friend of mine rightfully commented about it being a missed opportunity to use Gutenberg.
What if you could
open your browser,
type a URL,
and immediately start typing in Gutenberg.
and when you’re ready to share your content, click a button and send the sharing link to your collaborators.
the content will be encrypted,
only your collaborators will be able to read it,
even the web application’s server can’t decrypt it
then, you can work on the content live with your collaborators and potentially persist it to the cloud once done.
This is exactly what my new side project is about. Try it for yourself: https://asblocks.com.
Supported features
It’s still a young project but already packed with features
End-2-end Encryption.
Live collaboration/editing.
Read-only link.
Dark Mode.
Supports almost 30 Gutenberg blocks.
Cloud persistence.
Next features on the roadmap include:
Comments.
Live Chat.
Selection/Caret indicators.
Document outline, counts.
Local save button.
Local storage persistence.
Notes
The live collaboration conflict resolution may contain some small bugs for the moment.
This is inspired by excalidraw (Similar idea applied to diagrams).
And WordPress
As you might already know, one of the next phases of the WordPress Gutenberg project is to bring collaborative editing to Core. AsBlocks is an important step in our journey to understand live-collaboration and bring it to Core. A WordPress plugin based on AsBlocks’s technology is also on the radar.
Open-source
Last but not least, it’s GPL-licensed, same as WordPress, and you can help shape the project on the GitHub repository.
You might not know it yet, but WordPress is working on a project called Full Site Editing, with the goal of allowing users to edit any part of their site in a single and coherent way using the block editor.
The project is based on a new kind of theme called “block-based themes”. If you want to learn more about the project and these themes, I’d encourage you to check out the following links:
Full Site Editing and block-based themes are still very experimental, and since I’m actively working on the project, I decided that the best way to test the work we’re doing is to use a block-based theme on my own blog (the experiment is already a success as I’ve managed to discover some bugs).
I’ve now switched my site’s theme to the TwentyNineteen theme being developed in the theme experiments repository.
I’m not going to lie: don’t do that unless you feel adventurous. The project is still being heavily iterated on; it is lacking a lot of fundamental blocks, and UX interactions are not polished. That said, I was very pleased to be able to just open the Site Editor page and have a representation close to the frontend, where I tweaked some parts of the footer/header without having to dive into several menus, widgets, the customizer, and some settings page. I look forward to being able to hide these pages entirely from my admin as I don’t need them anymore.
This post presents different performance improvement and monitoring techniques that can be used in any React/Redux application.
Akin to the React Concurrent mode, it also introduces an async mode for Redux applications where the UI can’t be windowized.
WordPress 5.0 included a new block-based content editor. The editor is built as a typical React/Redux web application with a global store and a tree of UI components retrieving data using state selectors and performing mutations using actions.
Note: To be more precise, the WordPress block editor (sometimes called Gutenberg) uses multiple stores, but for the purpose of this post, we can simplify and assume it uses a single one.
Relying on the shoulders of giants: react-redux
The main performance bottleneck for most React/Redux applications is the fact that any change in the global state can potentially trigger updates in all the components subscribed to store updates.
Fortunately, the simple fact of using react-redux is enough to solve most of these performance issues. The library is highly-optimized out of the box.
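The snippet below is a sketch of the typical pattern (the Block component and the getBlock selector are placeholders, not the editor’s actual code):

import { connect } from 'react-redux';
// getBlock is a placeholder for a selector returning a block object from state.
import { getBlock } from './store/selectors';

function Block( { block } ) {
	return <p>{ block.name }</p>;
}

const mapStateToProps = ( state, ownProps ) => ( {
	block: getBlock( state, ownProps.blockId ),
} );

export default connect( mapStateToProps )( Block );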
In the example above, each time the global state is changed, the mapStateToProps function is executed to compute the updated props passed to the underlying UI component.
By default if the computed props (block in our example) don’t change, the underlying component (Block in the example) is not re-rendered.
It’s important to note that react-redux‘s connect function performs a shallow comparison to check if the computed props changed or not. This means generating new object instances in mapStateToProps should be avoided and selectors (getBlock in our instance) should ensure that it returns the same block object instance on each call unless an actual change to the block object has been made.
// Bad: a new block object is generated on each render, causing re-renders even if the block name didn't change.
const mapStateToProps = ( state ) => ( {
	block: { name: getBlockName( state ) },
} );
const MyBlockComponent = connect( mapStateToProps )( BlockComponent );

// Bad: doing the same thing in a factorized selector is bad as well. It is strictly equivalent.
const getBlock = ( state ) => ( { name: getBlockName( state ) } );
Track component re-rendering
The first thing you should track when you notice performance degradations is whether you have components being re-rendered too often and without any meaningful prop change.
To do so, install the React Developer Tools browser extension, check the Highlight Updates option and notice the flashing borders around all the components being re-rendered. You can also inspect a given component and check which props are changing when it’s re-rendered.
Proxifying event handlers
Often, when using react-redux‘s connect function, you end up providing event handlers that depend on props. For components optimized for purity (they don’t re-render unless their props change), this can lead to unwanted re-renders because the event handlers end up being recreated on each render.
To address this issue, @wordpress/data implemented its withDispatch higher-order component (equivalent to connect) with the idea that we only care about these event handlers when the event happens (a click on a button…). So instead of recreating the event handlers on each render, withDispatch provides proxies to the actual event handlers; these proxy instances don’t change between renders and only evaluate the actual event handlers when they get called. The assumption here is that the list of event handlers won’t change depending on the component’s props.
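Here’s a hedged sketch of what this looks like from the consumer side (the store name and the save action are placeholders):

import { withDispatch } from '@wordpress/data';

function SaveButton( { onSave } ) {
	return <button onClick={ onSave }>Save</button>;
}

export default withDispatch( ( dispatch, ownProps ) => ( {
	// The returned handler is proxied: its instance stays stable across
	// renders and the actual dispatch call is only evaluated when the
	// click event fires.
	onSave() {
		dispatch( 'my-plugin/store' ).save( ownProps.postId );
	},
} ) )( SaveButton );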
Note that the data module now offers useSelect and useDispatch React hooks, which require a different technique to optimize event handlers that need data dependencies.
Optimize the selectors
Now that we’ve ensured that our components re-render only when necessary (one of the props changed), we started monitoring our application to find the bottlenecks.
When building an editor, one of the most important interactions you’d want to optimize is “typing”. When quickly typing in the editor, the user shouldn’t notice slowness; the feedback (the character being printed) should be immediate. Using the Chrome performance tools, we started monitoring the keypress event duration.
Keypress event monitoring
Quickly, we realized that the more content the editor is showing, the more rendered components we have, the worse the typing performance gets. And even if the components were memoized, their selectors were still being called on each change even if their result didn’t change. Selector calls quickly became the bottleneck of the editor’s performance. Our next step was to optimize the performance of our selectors.
The most important technique to be aware of here is what we call function memoization. Memoizing a function means that a function is not executed twice unless its inputs (arguments) change.
In the React/Redux world, there are a number of libraries allowing you to memoize selectors, some of the most used ones being reselect and rememo.
Note: Memoization is a good technique, but it’s important to monitor and measure the performance improvements. Start by memoizing the least performant selectors. Memoization is also a technique that can be used to avoid creating new object/array instances if the inputs are the same (which then prevents components from re-rendering unnecessarily).
Reshape the state tree to avoid high selector cache invalidation rates
In a typical Redux store, you’ll have some data that changes very often and other state values that don’t. It is important that these two things stay separate in the Redux state tree for better selector performance.
Let’s take the following blocks redux state as an example:
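A sketch of one possible shape (the fields are purely illustrative):

const state = {
	blocks: [
		{ id: 'block-1', name: 'core/paragraph', attributes: { content: 'Hello' } },
		{ id: 'block-2', name: 'core/image', attributes: { url: 'https://example.com/a.jpg' } },
	],
};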
If we want to optimize the selector to avoid computing a new array if the state stays the same, we’d write something like:
const getBlockIds = createSelector(
	( state ) => state.blocks.map( ( block ) => block.id ),
	( state ) => [ state.blocks ]
);
The second argument here tells the selector to avoid recomputing the array if the state.blocks value didn’t change.
That’s a good first step. The problem, though, is that we don’t reorder or add new blocks as often as we change block attributes. When an attribute changes, the selector’s value won’t change, but the whole “blocks” state will, causing the selector to recompute again.
This issue is solved by identifying which parts of the state change often and which ones change less. Ideally, we should group all state values that change “together” under the same state key.
Here’s an example of a rewrite that can lead to better performance:
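The following sketch illustrates the idea (field names are illustrative, and createSelector refers to rememo’s helper mentioned above):

import createSelector from 'rememo';

// Values that change together are grouped: the list and order of blocks
// change rarely, while attributes change on every keystroke.
const state = {
	blocks: {
		order: [ 'block-1', 'block-2' ],
		attributes: {
			'block-1': { content: 'Hello' },
			'block-2': { url: 'https://example.com/a.jpg' },
		},
	},
};

// The selector now only depends on the part of the state that changes rarely.
const getBlockIds = createSelector(
	( state ) => [ ...state.blocks.order ],
	( state ) => [ state.blocks.order ]
);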
You’ll notice that now the array returned by getBlockIds won’t change unless the order or the list of blocks is actually changed. An update to the attributes of blocks won’t refresh the value returned by that selector.
Async mode
Memoizing slow selectors did have an impact on the performance but overall, the high number of function calls (selector calls) was still an issue even if a single function call is very fast. It became apparent that instead of optimizing the selectors themselves, our best bet would be to avoid calling the selectors entirely.
This is a typical performance issue in React/Redux applications, and the approach most people take to solve it is windowing. Windowing means that if a component is not currently visible on the screen, there’s no need to render it to the DOM and keep it up to date. react-window is one of the most used libraries when it comes to implementing windowing in React applications.
Unfortunately, windowing is not an option we can consider for the block-editor for multiple reasons:
In general, windowing works by computing the height of the hidden elements and adapting the scrolling behavior to the computed height even if the elements are not rendered on the DOM. In the case of the block editor, it’s actually impossible to know or compute the height of the blocks and their position without really rendering them to the DOM.
Another downside is accessibility (a11y) support: screen readers tend to scan the whole DOM to provide alternative ways to navigate the page, without relying on a notion of “visible” elements. Something that is not rendered in the DOM is something you can’t navigate to.
For these reasons, we had to be a bit more innovative here. While the initial rendering of the components had a cost, the most important thing for us is to keep the UI responsive as we type, and the bottleneck at this point was the number of selectors being called.
That said, in a typical block editor, when you’re editing a given block, it is very rare that an update to that block affects other parts of the content. Starting from this hypothesis, we implemented the Async Mode.
What is the Data Module’s async mode?
The Async mode is the idea that you can decide whether to refresh/rerender a part of the React component tree synchronously or asynchronously.
Rendering asynchronously in this context means that if a change is triggered in the global state (Redux store), the subscribers (selectors) are not called synchronously; instead, we wait for the browser to be idle and then perform the updates to the React tree.
It is very similar to the Concurrent mode proposed by the React team in the latest React versions. The difference is that React’s concurrent mode uses setState calls to defer rendering, but in our case, we want to defer the selector calls, which in the call chain happen before the React setState calls.
How did we apply the async mode to the editor?
Our approach was to consider the currently selected block as a synchronous React component tree, while considering all the remaining blocks as asynchronous.
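A simplified sketch of that idea, assuming the AsyncModeProvider export from @wordpress/data (the Block component is a placeholder):

import { AsyncModeProvider } from '@wordpress/data';
// Placeholder for the component rendering a single block.
import Block from './block';

function BlockListItem( { clientId, isSelected } ) {
	return (
		// The selected block stays in sync mode; every other block re-renders
		// asynchronously, when the browser is idle.
		<AsyncModeProvider value={ ! isSelected }>
			<Block clientId={ clientId } />
		</AsyncModeProvider>
	);
}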
It is possible to opt out of Async mode for a given block. This can be useful for blocks that rely on other blocks to render properly (e.g. a Table of Contents block).
At this point, our biggest performance issues were solved, but still, we wanted to continue improving the editor’s performance as much as we can.
What’s next
Building a performant editor is a challenge, an ongoing one, and unlike regular features, performance is not something you can implement and forget about; it’s a continuous workflow. If you have to take something away from my lengthy blog post, I hope it’s this:
Identify the flows you want to optimize,
Identify the bottlenecks of your application,
Measure constantly, ideally in an automatic way on each PR,
Don’t learn and apply optimization techniques blindly, instead, read about them, know their existence and try to adopt the best techniques based on your particular use-cases.