Key takeaways
AI is not going away
Despite bubble concerns, enthusiasm for AI remains strong and its use is increasingly expected.
Limited regulation but still plenty of risk
It remains important to consider how your use of AI could affect your legal and professional obligations and your business.
Be wary of AI generated content
Always investigate and fact-check content if you do not know its source, and do not share it until you have.
We are now almost two months into 2026 and, despite suggestions at the end of last year that the AI bubble was about to burst, we see no sign of enthusiasm for its use declining. Nor do we see any sign of truly effective regulation of AI. Investment in AI platforms and companies continues at an almost feverish rate, while the chief executive of at least one successful AI startup (Anthropic) warns of the risks AI poses to humanity if left without adequate restraints.
Is the answer to these risks proper regulation? If you look at approaches to AI regulation around the world then, with the exception of South Korea’s AI Basic Act (which has just taken effect and requires AI content to be digitally watermarked) and the EU’s AI Act (which restricts “high risk AI” and may or may not be delayed or amended before it comes into force this year), the global approach to AI regulation is piecemeal at best.
Here in Hong Kong for example, our AI rules comprise only guidelines and best practices from the Hong Kong Privacy Commissioner or those put forward by an industry regulator. Often these guidelines carry limited weight since there’s little or no penalty for non-compliance.
If you have been working in technology for a long time you will know that regulation comes long after innovation. In that respect, the approach to regulating AI is reminiscent of how rules and regulations evolved in the early days of the internet.
The technology space typically operates in a cycle of innovation, enthusiasm, investment and adoption; only later does concern follow, then fear of negative impact, possible or actual harm, and eventually regulation. We saw this with the dot-com boom and subsequent crash, through which the internet evolved from a brand-new way to communicate and collaborate into the indispensable, ubiquitous network it is today (creating a few very powerful tech giants along the way). We are witnessing the same now with social media, which went from a way to connect with friends and like-minded contacts into the advertising, data-mining and influencer-driven giant that has become the primary source of contact, news and, sometimes, misinformation for many, and a real target for regulation in recent years (to the point that some jurisdictions are seeking to ban minors from accessing social media).
So where is AI on this trajectory? In a relatively short time generative AI has moved from being a convenient assistant that can complete complex tasks from a single prompt into a technology with the capacity to drive incredible advances in human development… or to cause incredible harm. In just a few years, generative AI has shown how easily knowledge, expertise and research can be generated from minimal human input.
Except that, thus far, it appears to be the human input that has led generative AI to cause loss or harm. If it is possible to create something that can make money or achieve some other gain for free (or almost for free), what is to stop thousands if not millions of people from doing just that?
Or to put it another way, if AI can be used to generate a fake video or image which can then be monetised, such as a nude deepfake, what is to stop anyone from doing the same?
Unfortunately, at present, seemingly very little. This is illustrated by the volume of AI generated material available which is not only of poor quality but has been deployed to create misinformation or generate easy views (and therefore revenue) on social media platforms. This “AI Slop” is slowly taking over social media platforms and the internet, and it poses some serious problems for genuine content creators, IP owners, businesses and individuals.
Armed with a handful of genuine images and an AI platform with limited guardrails, you can create a video or photo of somebody doing anything. In fact, you can create a video or photo of anything doing anything. This causes headaches for cybersecurity professionals, who must contend with the fact that it is increasingly easy to trick humans with AI-generated misinformation, leading them to unwittingly compromise systems and causing significant financial and reputational loss.
It also poses significant problems for those taking part in democratic elections, where AI may be deployed to win votes; for authors and artists in the creative industries, who have seen AI used to copy and reproduce their work; and indeed for anyone growing up in the age of social media and generative AI who has been bullied, or had their likeness or data misused without their consent, via AI tools.
Modern regulation of the collection and use of personal data was heavily influenced by the use and misuse of personal data in the early internet age; regulation of social media has been shaped not only by its impact on personal data collection and advertising, but also by its impact on society, the rise of misinformation, and the protection of minors.
Regulation of generative AI, if and when it does arrive, will have to tackle all of these issues, as well as grasp the impact of AI in the coming years and the possible development of "all-powerful" superintelligent AI. This is a tall order for any law or regulator, especially when competing geopolitical interests are at play in the rise and value of AI and the AI arms race. However, some meaningful regulation is needed, and 2026 may be the year in which establishing it must become a priority.
