As we close out 2021 and ring in what we hope will be a bright and fulfilling year, it’s time to reflect on the trends likely to shape the months ahead. We live in a world experiencing major transformations and exponential trends, and we are likely to see significant developments in the new year.
So what might those changes be? Here are a few predictions:
COVID slides into the background
Just as we were expecting the pandemic to fade and become endemic, the Omicron variant surprised us yet again with a large number of mutations, increased transmissibility and an ability to land the unvaccinated in hospitals. The fact that it hit right around the holiday season, causing thousands of flight cancellations and millions of upended plans, made its psychological impact even worse. But, on the positive side, this too shall pass. Successive mutations will likely become less deadly, and the virus will eventually go the way of every other pandemic. Perhaps Omicron itself is the last major variant. Time will tell, but we will likely see the end of COVID as an economy-stopping phenomenon by the end of 2022.
A Changed World: Telehealth, Remote Work, Reduced Social Contact
While COVID will slip into the background, life will not return to the way it was in 2019. In the more than two years since this fiasco began, it has caused such fundamental shifts in so many aspects of life that expecting them to dissolve and dissipate, as if Thanos had simply snapped his fingers, is unrealistic. For example, companies are already cutting back on large real estate leases, realizing that the entire workforce is unlikely to return to work in person. According to Fortune, 74% of Fortune 500 CEOs expect a reduction in their real estate footprint.
The pandemic also made us comfortable with remote consultations with our healthcare professionals. In fact, the share of patients preferring telehealth appointments has increased from less than 40% pre-pandemic to more than 60% now. One can imagine combining a broad range of in-home tests with online consultations to help diagnose an array of conditions. In-home testing is the missing piece in this equation so far, and I expect it to be an area of significant growth. Specialized devices combined with AI-based smartphone apps that can detect patterns should be an exciting area to watch.
Fewer visits to the office also mean reduced social contact in general. The office is, among other things, programmed socialization. Time spent with the nuclear family will increase, but for many, the number of new people they meet in real life will diminish greatly. A less friendly world, even more challenged by mental health crises, appears to be on the cards.
Call it what you want, the Metaverse is on its way
As people at home with reduced social contact are driven to interact through digital means, time spent online will remain high and increase over the long term. According to DataReportal, the average global internet user now spends six and a half hours online every day.
While we won’t be walking around all day in VR or AR headsets any time soon, we will, one application at a time, dip our toes into the Metaverse. Some might argue that the idea of the Metaverse didn’t need a new name at all, and they have a point, but there is no use fighting a name that is likely to stick. The “Metaverse” is how we will refer to higher-fidelity, connected digital experiences within which rich social and commercial interactions are possible with both AIs and humans. Another way to look at the Metaverse is as an extension of existing trends: improvements in graphics realism and gaming interactivity, the virality of social networks, the spread of AI and visualization technology, high-speed networks, and vastly improved digital financial technologies. When the component parts are available, someone is likely to put them together. Voila! Metaverse.
Will we spend more time in the Metaverse? Yes. Despite the fact that we love to hate social networks, we vote with our attention, and are presently spending nearly two and a half hours every day on these sites. If our digital experiences and interactions become even richer as things move to the “Metaverse”, will this number go up or down? Quite likely, up.
Crypto is the new software infrastructure
Bitcoin pioneered the idea of a blockchain: a decentralized, unalterable, cryptographically secured store of data that includes data provenance and change histories. Think of it as a ledger no one owns, hard to hack but slow to transact with. If the Bitcoin blockchain was a specialized, decentralized database, Ethereum built on these ideas to implement a “virtual machine”: a virtual computer that runs on a decentralized network. Imagine a computer cobbled together from the resources of all the individual machines that connect to each other to run the peer-to-peer Ethereum protocol. With programs (called “smart contracts”) running atop this virtual machine, one can see the early beginnings of a “world computer” emerge, whose resources are owned not by a trillion-dollar corporation but by millions of users who profit collectively from the system’s use. How would this change the dominance of cloud vendors? How could it prevent the exploitation of individual and small-business data? Overall, decentralized infrastructure for computing and storage is an important and growing trend, and one which can restore some balance to a tech ecosystem in which three or four giants represent such a huge percentage of the entire sector’s market capitalisation.
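The ledger idea above can be sketched in a few lines of code. This is my own illustrative toy, not Bitcoin’s or Ethereum’s actual protocol (there is no mining, consensus, or peer-to-peer network here); it shows only the core trick: each block stores the hash of its predecessor, so altering any past record invalidates every later link.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a block's canonical JSON form.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    # Each new block commits to the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    # Recompute every link; any tampering breaks the chain.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
assert verify(chain)

chain[0]["data"] = "Alice pays Bob 500"  # tamper with history
assert not verify(chain)
```

This is why the ledger is described as unalterable: rewriting history means recomputing every subsequent hash, which real blockchains make prohibitively expensive through consensus mechanisms.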
The innovation stemming from the crypto ecosystem is not limited to blockchain and a P2P “world computer” that can run code. There’s much more, such as side-chains (Layer 2) that work around the slow performance of base blockchains; new blockchains (Layer 1) that are inherently faster; and non-fungible tokens that can represent an individual instance of an asset, like a painting or a plot of land (unlike currency, where one $1 note is equivalent to another). There is also the idea of automated governance implemented via algorithms instead of people and management teams: Decentralized Autonomous Organizations (DAOs).
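The fungible/non-fungible distinction can be made concrete with a toy sketch (my own illustration, not any real token standard such as ERC-20 or ERC-721): a fungible token is just a balance, where any unit is interchangeable, while a non-fungible token names one specific asset with one owner.

```python
# Fungible: only balances matter; any 10 units are as good as any other 10.
balances = {"alice": 100, "bob": 50}

def transfer(frm: str, to: str, amount: int) -> None:
    assert balances[frm] >= amount
    balances[frm] -= amount
    balances[to] = balances.get(to, 0) + amount

# Non-fungible: each token id identifies one unique asset with one owner.
owners = {"painting-42": "alice", "plot-7": "bob"}

def transfer_nft(token_id: str, to: str) -> None:
    owners[token_id] = to  # this exact asset changes hands

transfer("alice", "bob", 10)        # any 10 units will do
transfer_nft("painting-42", "bob")  # this specific painting, not a copy
```

Real NFT standards add on-chain ownership proofs and transfer rules, but the underlying data model is essentially this: a mapping from unique identifiers to owners.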
On top of all these innovations, of course, is the currency, or token, itself. By giving users a token, you can incentivize those who hold a shared belief in what you are collectively building. The incentive accrues from a belief in future value, not from the expenditure of mountains of capital. Incentives exist in every industry and are used to accelerate growth. All of us have seen those “buy one, get one free” signs, been offered $50 in free stock when signing up at a brokerage, or been enticed with a free ticket. But these incentives generally require spending hard currency and can only be offered by those who already hold large amounts of capital. Entrepreneurs who are just starting out, startups and smaller companies may not be able to match this style of land grab via “dumping”, and are hence at a disadvantage even with a superior product.
However, if users believe in the superiority of the product being built by these entrepreneurs and accept a “token” that enables economic exchange within the product or ecosystem, they may engage with the product and help it grow, incentivized by the knowledge that as usage increases, the value of the tokens they hold will also increase. Today, the user of a search engine or social network gets no such benefit. If you were an early adopter and helped build a monopoly, then, in the immortal words of Douglas Adams: “So long, and thanks for all the fish.”
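A toy model makes the incentive mechanism vivid. Everything here is my own assumption for illustration: the numbers are invented, and the premise that token value scales with the square of the user base (a Metcalfe-style heuristic) is a simplification, not a claim about any real token.

```python
def token_value(total_users: int, base_value: float = 0.01) -> float:
    """Assumed toy pricing: value grows with the square of the user base."""
    return base_value * total_users ** 2 / 1_000_000

# An early adopter is granted tokens for helping bootstrap the network.
early_holding = 1_000

for users in (1_000, 100_000, 1_000_000):
    stake = token_value(users) * early_holding
    print(f"{users:>9,} users -> early adopter's stake worth ${stake:,.2f}")
```

Under these assumed numbers, the early adopter’s stake grows by orders of magnitude as the network grows, which is exactly the benefit the early user of a search engine or social network never receives.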
Developers are moving to crypto ecosystems in droves, and this will transform the future of application development, and indeed, the tech industry. This is a space rich with new ideas and many smart builders who will ensure that their work meaningfully impacts the future. In 2022 and beyond, a growing number of applications will be built on “crypto rails”.
No-Code, Human-AI collaboration and Automated Idea Generation continue to improve
Personal computers first came out in the late ‘70s and achieved great popularity by the ’80s. Early systems such as the Commodore 64 or TRS-80 would boot straight to a BASIC programming environment as soon as they were turned on. Kids, parents, and indeed all users were expected to be programmers as much as they were end users. Pre-packaged commercial software was not as abundant as it is today and computer magazines would often include listings of BASIC programs that had to be painstakingly keyed into the interpreter before they would run. School curricula were designed around the assumption that using computers meant programming in a high-level language like BASIC. BBC computers and Acorn RISC machines were distributed to schools across the UK. Apple IIs and later, IBM PCs, were bought by schools in the US.
My own experience with a computer science education suggests that the value of programming lies not just in the ability to use a computer. Algorithmic thinking is important and valuable in its own right, improving the quality of decision making and of life in general. Be that as it may, programming is sadly not everyone’s cup of tea. How do we enable non-programmers to use computers to build upon and manifest their unique ideas? The no-code trend, which has already taken hold, provides a possible path forward.
Through increasingly capable drag-and-drop tools that allow the development of hobbyist-level games, form-based applications, trained AI models and many other types of apps, the no-code movement brings the power of programming to the non-programmer. Having followed a few of these tools myself, I have witnessed substantial improvements over short periods of time. I expect such tools will enable modestly sophisticated users to develop fairly functional applications.
Will this give birth to a custom app development renaissance? A time when anyone can piece together exactly the functionality they need, without having to wait for a commercial product to deliver these features? Quite possibly. Such freedom has been a cherished dream since those early days of personal computing when kids learned BASIC programming, and Unix developers expected that every user would easily piece together complex functionality on the command line by combining small, pre-written programs. I would bet that this time will be the charm. It won’t all happen in 2022, but this trend will gain strength through the year.
Beyond application development, there is tremendous value in tools that use AI to help evolve ideas and suggest unique ways of looking at a problem. Many such tools are already practical and quite powerful. Today, they are used to design novel structures with optimal properties, assist poets with ideas for verse, evolve paintings from neural network “deep dreams” and identify alpha-generating financial trading strategies. OpenAI’s GPT-3 language model is a great example of how, with only a short prompt, such tools can author an entire article. After the initial magic wears off, one realizes that not all is as it should be in the generated output, but technologies like these can definitely supplement the human creative process and help overcome creative blocks.
Operationalised Hyperwar capabilities become a greater reality
If hostilities break out in Ukraine, we might see shades of a robotized Hyperwar, albeit in the context of a limited conflict. Russia has spent years developing unmanned ground vehicles (UGVs) and integrating multiple types of unmanned aerial vehicles (UAVs) into its operations. On the other side, the Ukrainians have acquired TB-2 drones from Turkey, a platform that performed remarkably well in the Nagorno-Karabakh conflict. While countries have been using drones in military operations for many years, this time will be different. We will see:
- Air-to-Air engagements: Drones have been used to engage ground targets, but the next conflict will see them take on other drones in the sky. In fact, the Russians have already tested an air-to-air weapon fired from their Orion UCAV, which successfully destroyed a target drone helicopter.
- Drones deploying from other drones: Unmanned aircraft will be used as motherships for smaller autonomous craft in the next conflict. The Turkish TB-2 has been tested as a platform for deploying loitering munitions with a range of more than 200 miles. This type of capability is unprecedented: drones generally deploy short-range, direct-fire missiles. A standoff capability of over 200 miles, delivered by an intelligent, autonomous weapon, will be new and profound.
- UGV-UAV combined operations: Russia tested such capabilities during its last major exercise, Zapad. The next war will put these capabilities to the test.
- Swarms: Small numbers of Turkish Kargu drones were used effectively in Libya, but in a Ukrainian conflict we are likely to see huge numbers of such autonomous weapons simultaneously looking for targets. Technical definitions of swarming aside, many eyes in the sky, all waiting to swoop down on armor or dug-in positions, is horrifying… and new.
Electronic warfare, cyber and perhaps even directed energy weapons could be used in new ways. If it occurs, this war will accelerate the development of Hyperwar technologies, and in particular, drive the next level of integration between multiple autonomous platforms. Technological laggards within ministries of defense the world over, and those who still believe in the relevance of aging platforms, are likely to finally be pushed aside in the aftermath of Ukraine. Within a decade, major militaries will be Hyperwar capable. Those that aren’t, won’t remain major militaries.
All that said, while many experts have predicted that Russia will invade Ukraine, we are holding out hope that this does not actually happen. We believe any extended conflict will be devastating for Ukraine, incredibly challenging for Europe, and ultimately not in Russia’s interest. One can only pray that greater sense prevails.
The Future will be surprising, but it is ultimately Our Choice
We are living in an exponential age, when many fast-paced technologies are improving so quickly that our minds can’t truly fathom the change they will bring about. Humans are used to slower, linear change, and exponentials have a tendency to surprise us. In fact, if there is one thing we should be certain of about the future, it is that it will bring forth many surprises. Even trends we are aware of now and think we understand will, in time, drive outcomes at a scale hard for us to imagine.
In the 21st century, we have tools and technologies at our disposal unlike any we have had before. They can be used to our benefit, or they can be used to create suffering. We are good at building and inventing, but perhaps the need of the hour is for us to apply what we’ve built to, more directly and more quickly, improve the human condition. As Anne Frank said, “How wonderful it is that nobody need wait a single moment before starting to improve the world.” Let’s begin!