With the release of ChatGPT, everyone in the tech industry – from CEOs to managers and engineers – is wondering how these new tools and technologies can and will disrupt their lives. Will they make jobs obsolete, or will they increase productivity? Should we fear them, or embrace them and benefit from them?
Developers often think about how AI can help them do their jobs more efficiently. Tools like Copilot, Ghostwriter, and ChatGPT are increasingly being used as coding assistants. One might ask how much further they can go and which developer duties they can absorb. Some argue that developers’ added value will decrease; more optimistic observers think AI will free developers from boring, repetitive tasks and let them focus on harder, more important ones. In the mid term, the reality is probably a bit of both: we will need fewer developers, but the ones who remain will be using AI heavily to get the job done.
Content creators are also heavily impacted, and several actors in the field are racing to see how much AI they can integrate into their software to speed up and improve the content creation process. Notion with its AI assistant, Shopify with Shopify Magic, and Automattic with Jetpack AI are all experimenting, but we’re still very early in the game, and no one really knows what will or won’t work.
That said, when a disruptive technology arrives, we often need to think outside the box to really understand its impact. It’s not enough for a developer to apply the technology to their day-to-day workflows and habits, or for CMS vendors to ask how best to use AI to improve the creative process. One has to ask what the real purpose of one’s work is in human life, and consider its broader social, ethical, and cultural implications.
My job today is to build software used for content creation and building websites. But why do we build websites in the first place? A website is not a goal in itself; the real goal is to provide information to users in the most efficient and organized way. A user goes to a website to accomplish a task: booking a reservation, finding information about a product, or learning about a service. So the question I have been asking myself is whether AI can help users accomplish these tasks in the easiest and most effective way.
Are websites still the right tool for that purpose?
Why would I open a browser if I can just talk to a bot and get an instant, customized reply? Yes, people have been talking about bot-first UIs for some time now, but bots have always felt inefficient. For example, I can’t ask Siri about the price of a facelift at my hometown plastic surgery clinic and get an instant answer: Siri is technically limited and understands only a small subset of natural language. The clinic had to build a website so I could get all the information I need, and Siri will just point me to that website. But ChatGPT doesn’t have these boundaries. What if there were a way for the clinic to feed all this information into a GPT-powered bot that anyone could query for up-to-date answers? What if that way of feeding an AI were an easy-to-access standard that any small business could use? What would be the place of websites in such a world?
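To make the idea concrete, here is a toy sketch of what such a standard might look like. Everything here is invented for illustration – the feed format, field names, business name, and prices are all hypothetical, not an existing specification – but it shows the shape of the idea: a business publishes structured, up-to-date information once, and any bot can answer questions from it directly instead of sending the user to a website.

```python
import json

# Hypothetical structured "AI feed" a small business might publish at a
# well-known URL. The format and field names are invented for illustration.
clinic_feed = {
    "business": "Hometown Plastic Surgery Clinic",
    "updated": "2023-04-01",
    "services": [
        {"name": "Facelift", "price_usd": 8500, "consultation_required": True},
        {"name": "Rhinoplasty", "price_usd": 6200, "consultation_required": True},
    ],
    "contact": {"phone": "+1-555-0100", "hours": "Mon-Fri 9:00-17:00"},
}

def answer(feed: dict, service_name: str) -> str:
    """Toy lookup a bot could run instead of pointing the user to a website."""
    for service in feed["services"]:
        if service["name"].lower() == service_name.lower():
            return f'{service["name"]} costs ${service["price_usd"]} at {feed["business"]}.'
    return "Service not found."

# A bot asked "how much is a facelift?" could answer directly from the feed:
print(answer(clinic_feed, "facelift"))
# -> Facelift costs $8500 at Hometown Plastic Surgery Clinic.

# The feed itself is plain JSON, so any crawler or model could ingest it:
print(json.dumps(clinic_feed["contact"]))
```

In a real deployment, the lookup would of course be done by a language model over many such feeds, not by a hand-written function; the point is only that the business’s side of the standard could be this simple.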
Another question worth asking is: who provides that standard, and where will the data be hosted for the AI to consume? Given the cost of training and running large-scale AI, the most logical answer is that BigTech (Google, Microsoft, Facebook) has a certain advantage there. These companies also already have access to huge data sets covering businesses and their information. But how much will people trust them with all their data?
In conclusion, this post asks more questions than it answers; the truth is that no one really knows the answers at this point. All we know is that the rate of disruption is increasing, and change is coming sooner than one might expect. A world without websites, dominated by big AI models from BigTech, sounds plausible; it’s not very compelling, but it also seems inevitable. My conviction is that websites need to evolve. There might still be a place for them in that world, but it will be more of a niche, driven by nostalgia and self-esteem. Websites will enter the museums of the future.
One response to “The place of websites in a future dominated by AI”
Major questions indeed, thanks Riad for the article.
Many end-users indeed mistake the means for the end goal. As you underline, this is true for a website: the goal is first to provide information. It’s even worse for SEO, where countless people pitifully compete for positions in search engines, whereas the only goal is actions (reading, registering, downloading, buying…), not even visits per se, let alone where links to your site stand on a page.
What will globally remain of the websites we are creating, and where WordPress will stand in the future, are major questions many of us struggle to ponder.
One major out-of-the-box question, though, as you underline, is where, how, and based on which criteria AI decisions will be made. Society as a whole, and politics in the noble sense, with hopefully enlightened leaders, should be concerned and have a right to know what happens with personal data and AI decisions; this is a major issue, and not only for IT people. Not to mention how the added value and wealth created by such life-changing systems will be distributed. So that such progress does not become harmful and a source of destruction, some thinkers and researchers say we should consider a societal tax on bots, whether physical robots or virtual AIs, so that profits do not become ever more concentrated while many jobs become obsolete, and hence some people irrelevant. If personal data were no longer wildly and illegally harvested by faceless corporations but paid for, things could evolve for the best. Are we one day going to see official bots, tools and services, complying with such values and behaviors, perhaps even tracked in a “republican” public blockchain?
Right now, we are at risk of having black-box decisions imposed on us, and many concrete examples in HR and recruitment have already been scary enough to show it’s happening: we know the input data and the outcome, but nobody, even among the people who actually wrote the AI rules and algorithms, seems able to explain why the system came to that result. That calls for more transparency requirements, just like we have in open source. Or else, one more dose of Black Mirror, anybody?
A very relevant explanation of AI’s impacts, from a media person who really stands out, was given on John Oliver’s show “Last Week Tonight,” with his signature mix of seriousness and fun. YouTube link: https://www.youtube.com/watch?v=Sqa8Zo2XWc4; if that’s filtered out, anybody can look up “Artificial Intelligence: Last Week Tonight with John Oliver (HBO)” on YouTube.
Should I create a website to discuss this further? But who would actually create it, and who would be in charge of its contents? 🙂
Pierre, WP Rennes.