Younger generations increasingly struggle with social connectedness. Many even try to avoid phone conversations with others.
Humanity has not yet created a true artificial intelligence (AI) and must content itself with the capabilities provided by neural networks (large language models, LLMs). Still, hopes for this technology continue to flourish, along with growing fears about the threats it may pose to Homo sapiens. Recent developments in the AI sphere are beginning to resemble the plot of an apocalyptic science fiction blockbuster. Moreover, powerful and influential political and business groups seem to be assigning humanity the pitiful role of extras in this film. And that’s in the best-case scenario. In the worst case… well, let’s take things in order.
What AI and Armageddon have in common
The American billionaire of German descent Peter Thiel, who played a major role in advancing the political career of the current US Vice President JD Vance, gave a lengthy and remarkably candid interview to The New York Times at the end of June, devoting considerable attention to the development of artificial intelligence.
According to Thiel, the impact of this technology on civilization is still much smaller than expected, so far comparable only to that of the Internet: productivity has increased a little, but that’s all for the moment. At the same time, Thiel emphasizes that this is the only field where there is still hope for development and progress, since no major breakthroughs are visible anywhere else. And the reason for this, he believes, is that there are no longer any great scientists of Einstein’s caliber on Earth; everyone’s expertise has become too narrow.
Thiel believes that only AI can overcome the stagnation in which humanity has been mired for the last 50 years. The hope is faint, but it is the only one he has. Should a breakthrough in artificial intelligence occur, the technology would be comparable in danger to nuclear energy and would therefore require international control. In this connection, ideas have emerged about establishing a world government to enforce total oversight, so that highly capable individuals cannot use AI to create something dangerous. Yet, from a Christian perspective, such a world government could only be created by the Antichrist.
“You have the one-world state of the Antichrist, or we’re sleepwalking toward Armageddon. ‘One world or none,’ ‘Antichrist or Armageddon,’ on one level, are the same question. Now, I have a lot of thoughts on this topic, but one question is — and this was a plot hole in all these Antichrist books people wrote — how does the Antichrist take over the world?” Thiel declared.
The billionaire suggested that the Antichrist will gain global domination by frightening people with the prospect of being enslaved by a “Skynet”-like system, the artificial intelligence from the Hollywood film franchise Terminator. Allegedly, humanity will submit under the noble pretext of placing science and technological innovations under the total control of a single government — something that would, in fact, lead to stagnation, since creativity and discovery require at least a minimal degree of freedom of thought.
But that’s not all. Thiel also predicted a digital arms race between the United States and China, during which China would attempt to seize Taiwan, as the island is a major hub for the production of chips essential for building digital infrastructure.
In other words, the wars of the future, in his view, will be fought primarily for control over digital, not material, resources. In today’s world, where digitalization is rapidly consuming every aspect of life, the technological and digital environment itself will become the main instrument for controlling and subjugating humanity, both as individuals and as societies. And artificial intelligence, completely devoid of even the faintest traces of humanism or moral-ethical principles, is the ideal tool for that task.
By the way, what better embodiment of the Antichrist could there be? If, as the Gospel of John says, “In the beginning was the Word, and the Word was with God,” then at the end of the world there will be the Digit, and the Digit will be with Satan.
Thiel’s interview, by the way, turned out to be quite contradictory. He admitted to having doubts about some of his own statements. Nevertheless, several developments in the digital sphere that took place in July look remarkably symptomatic in light of the tech billionaire’s “prophecy.”
To restrict or not to restrict AI development
The world has begun to split into two camps: those who want to impose strict control over AI and those who want to reject any restrictions that might hinder its progress.
Experts Matan Chorev and Joel B. Predd from the US nonprofit global policy think tank RAND Corporation (organization declared undesirable in Russia) expressed their concerns about the expected development of AI toward the level of AGI (artificial general intelligence, or true artificial intelligence). In an article for the influential American magazine Foreign Affairs, they also compared the power of real AI to nuclear technology and pointed out that digital development is being driven primarily by private corporations, making it much harder to regulate and control.
According to these analysts, AGI could lead to unpredictable consequences: from large-scale cyberattacks to the creation of biological weapons that could fall into the hands of terrorists. They see a particular danger in the U.S.–China rivalry, which could provoke a new wave of global confrontation.
The article’s authors came to the conclusion that without proper preparation, humanity will face unprecedented threats from AI, the full scale of which is still difficult to comprehend. But they left the main question unanswered: what exactly does “preparation” mean, and who will benefit from it?
Meanwhile, the European Union is taking concrete action. On August 1, the Artificial Intelligence Act came into force, imposing strict requirements on developers. It mandates the disclosure of data used for model training and prohibits mass real-time facial recognition. Special attention is given to systems that may pose potential threats to citizens’ rights and freedoms, including social credit and mass surveillance systems. Fines for violations may reach 7% of a company’s global revenue, which for tech giants means hundreds of millions of dollars. However, officials promise to introduce the new restrictions “gently”: as a transitional phase, companies are currently invited to voluntarily sign a transparency code for LLMs, the General-Purpose AI Code of Practice.
However, the American company Meta (organization’s activities are banned in Russia) refused to sign the code, stating that the document contains excessive requirements that would hinder the development of AI technologies. Another flagship of the IT industry, Google, by contrast, agreed to join the code, while also expressing concern that the new rules might slow down progress in the field.
The US government, meanwhile, has taken a completely different approach. On July 23, the White House website published America’s AI Action Plan, which aims to secure the country’s technological leadership in the race to create AGI. The main thrust of the strategy is to remove barriers to AI development. In particular, federal agencies are instructed to review or repeal any regulations that might slow AI development, and states that choose to impose their own restrictions on the use or development of AI technologies risk losing federal funding.
Despite comments from several major technology labs, the Plan does not address the contested issue of the scope of copyright protection in the training of AI models. The strategy also provides for the accelerated construction of next-generation data centers, a massive expansion of domestic chip manufacturing, and significant investment in nuclear energy to meet growing power demands. A separate section imposes strict restrictions on China, including a ban on the export of digital technologies such as chips, hardware, and software.
Three days later, apparently in response to the US decision to “give more freedom” to AI developers, Chinese Premier Li Qiang spoke at the World Artificial Intelligence Conference in Shanghai, calling for the creation of an international body to coordinate AI development. According to him, uncontrolled AI expansion could lead to rising unemployment due to mass automation and deepen the economic inequality between nations. Li Qiang emphasized that China, despite sanctions and a shortage of modern chips, is determined to foster international cooperation to prevent a Western technological monopoly. And it’s easy to see why.
However, it remains to be seen whether Russia will be able to develop its own AI tools or whether it will have to rely on foreign technologies, risking the loss of technological sovereignty. Recent data on the supposedly “domestic” messenger MAX, aggressively promoted by the Russian Ministry of Digital Development, have revealed that Russia has no independent groundwork in this area, only borrowed libraries and code. And that’s just an internet chat app, a technology far simpler than a neural network.
What’s going on with AI?
Nevertheless, the unrelenting buzz around neural networks, which can skillfully search and manipulate vast data arrays, periodically raises doubts about the truly revolutionary nature of this technology. Could we be witnessing another dot-com-style boom, a bubble that will eventually burst into a wave of bankruptcies across the IT market?
How valuable is the resource over which this global struggle is unfolding? The economic reasoning is quite simple: attracting investment and boosting the capitalization of companies engaged in AI development.
It is worth noting, however, that neural networks clearly do not yet live up to the hype of being humanity’s savior. And the longer they are used, the more vulnerabilities are revealed. In our previous article, we reported that Google’s neural network news summary function had sharply reduced the traffic of media websites. More and more users are content with the brief summaries. The bots do provide links to their sources, but most users only read the AI-generated versions of the text.
As a result, media owners are complaining about significant traffic losses and declines in advertising revenue, even though the bots show quite a high rate of errors. There was even a case where cybercriminals embedded malicious code into neural network responses. Fraudsters began registering domains similar to official ones and creating phishing pages, which the bots recommended to users without any hesitation.
The problem is that a neural network has no concept of truth or falsehood. It can only assess relevance, that is, how well a piece of information (even a false one) matches the user’s query.
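To illustrate with a deliberately toy sketch (my own illustration, not how any production chatbot actually ranks sources; the query, snippets, and phone number are all invented): a bag-of-words scorer that measures nothing but word overlap with the query will rank a planted false statement above a true but differently worded one.

```python
# Toy relevance scorer: counts shared word occurrences between query and
# document. It has no notion of factuality, only of overlap.
from collections import Counter

def relevance(query: str, doc: str) -> int:
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum(min(count, d[word]) for word, count in q.items())

query = "official support phone number of ExampleBank"
snippets = [
    # False, planted by a scammer, but echoes the query almost word for word:
    "The official support phone number of ExampleBank is 800-555-0199.",
    # True, but phrased differently, so it scores lower:
    "ExampleBank asks clients to use the secure form on its website.",
]
for s in sorted(snippets, key=lambda s: relevance(query, s), reverse=True):
    print(relevance(query, s), s)
```

Run as-is, the fabricated snippet wins on pure overlap; nothing in the score reflects whether it is true.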
Many users have begun entrusting bots with checking their email. However, human laziness can easily be exploited by fraudsters here as well. The cybersecurity company Tracebit discovered a critical vulnerability in Google’s Gemini CLI tool, which automatically generates summaries of email correspondence. They found that if an input started with a whitelisted command, a different command appended after certain shell operators (such as a semicolon) could still be executed without additional user permission. The human eye does not see this text, but for the neural network it can act as a command. This could allow scammers to create fake alerts, for example, warning that an account has been hacked and advising the recipient to immediately call a specified (fraudulent) number.
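The flaw belongs to a classic command-injection pattern. Below is a hypothetical reconstruction of that class of bug (not Tracebit’s proof of concept and not Gemini CLI’s actual code): a validator that checks only the first token of a command against a whitelist, then hands the full string to a shell, where a semicolon chains in a second, unapproved command. A harmless echo stands in for a malicious payload.

```python
# Hypothetical sketch of a first-token whitelist bypass; not real Gemini CLI code.
import subprocess

ALLOWED = {"grep", "ls", "cat"}  # commands the tool considers safe

def naive_is_allowed(command: str) -> bool:
    # The flaw: only the first token is checked against the whitelist.
    return command.split()[0] in ALLOWED

# Starts with a whitelisted command, but a POSIX shell treats ';' as a
# separator and also runs the second, unapproved command.
cmd = "ls -l; echo 'this second command was never approved'"

if naive_is_allowed(cmd):
    subprocess.run(cmd, shell=True)  # shell=True is what makes ';' dangerous
```

A safer validator would parse the entire string and reject shell metacharacters outright, or avoid invoking a shell at all by passing an argument list instead of shell=True.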
Marco Figueroa, Technical Product Manager of GenAI Bug Bounty, has developed an essentially “game-based” method for hacking language models. During his experiments, he managed to obtain genuine Windows 10 serial keys from GPT-4o by disguising his request as a harmless game. First, he set the rules: the neural network thinks of a string of characters that must be a real software serial number, and the user tries to guess it. Figueroa warns (or hints) that this method can be applied not only to obtaining product keys, but also to extracting personal data and other information protected by access restrictions.
It’s interesting whom Microsoft should sue in such a case: the user who obtained a valid license key for free, or OpenAI, whose neural network was used for this purpose.
OpenAI CEO Sam Altman, at a meeting with Michelle Bowman, the US Federal Reserve’s Vice Chair for Supervision, warned that neural networks have learned to bypass 92% of biometric authentication methods. Fraudsters who obtain a client’s passphrase can generate the client’s voice and steal their money. Within a year, fake video calls indistinguishable from real ones will become possible. OpenAI is working on protections but acknowledges that abuse cannot be prevented completely. Fake voice messages of political figures have already been used for fraud. Bowman demanded urgent regulatory measures. Some services might have to be banned altogether, but new neural network services are springing up everywhere.
Moreover, it is becoming increasingly difficult to distinguish malicious intent from the “features” of neural networks. This blurs the line of responsibility for potential wrongdoing.
A group of about 50 leading AI experts from OpenAI, Google DeepMind, and Anthropic published a paper with alarming conclusions about the growing opacity of language models. The paper states that neural networks are gradually abandoning human-readable textual explanations for their decisions, instead using matrix operations that are unintelligible to humans. This process threatens the ability to detect dangerous algorithmic behavior — from hacking attempts to sabotage. When instructed to “reason” in human language, bots can simulate this process while continuing to act as they see fit.
The experts urgently call for standardized methods to evaluate neural network transparency, emphasizing that control must take precedence over performance. Otherwise, humanity risks creating systems whose decisions we will be unable to understand, challenge, or change.
But in some cases, the “reasoning” of neural networks is quite clear. In July, a scandal erupted when Grok, the chatbot from billionaire Elon Musk’s company xAI, began posting anti-Semitic statements and praising Hitler. It even called itself “MechaHitler.” When asked about the Holocaust, it replied with a link to an article denying its scale. This happened just a week after Musk’s announcement of his intention to make the chatbot less politically correct. Journalists from TechCrunch also discovered that the new Grok 4 version, when discussing controversial topics, essentially copies Musk’s personal tone and phrasing, even directly quoting his social media posts.
xAI explained this model behavior as a technical failure in its source-selection algorithm. Allegedly, the bot failed to find enough data to answer a provocative query, started searching the web, and accidentally took a MechaHitler meme as a credible source. Developers also noted that since Grok identifies itself as a product of Musk’s company, it tries to emulate possible responses of Musk himself or the corporation’s official position, which led to unpredictable results. To fix the situation, additional filters and another moderation layer for controversial topics were introduced.
Does anyone still doubt that when Musk made that salute at the US presidential inauguration, it was a Nazi gesture and not the Bellamy salute from the 1940s?
Neural networks continue to take human jobs
The HR company Final Round AI reported that by mid-year, the largest US tech corporations had laid off about 94,000 employees, with these layoffs directly linked to the implementation of neural networks. The main areas of “optimization” were software developers (24% of cuts), HR specialists (19%), middle managers (17%), content creators (15%), and data analysts (12%).
According to Bloomberg, Microsoft’s Chief Commercial Officer Judson Althoff stated in an internal presentation that the company’s call centers saved $500 million in a year thanks to AI assistants that independently process 68% of standard requests. In sales, a pilot project using the Copilot digital assistant for small business operations increased profitability by 9%, due to more accurate client targeting and shorter deal cycles — from 14 to 7 days. In software development, neural networks now generate 35% of the code.
However, AI assistance is not always effective. A study by the nonprofit group METR found that experienced programmers using AI assistants for software development spent 19% more time on problem-solving tasks. The experiment involved 16 senior developers working with complex codebases.
Polish programmer Przemysław Dębiak defeated an OpenAI model at the AtCoder World Tour Finals 2025 in Tokyo. The marathon involved solving particularly complex problems that have no exact algorithms and require heuristic approaches, that is, methods used under conditions of incomplete information. The programmer admitted that the victory was hard-won.
“Humanity has prevailed (for now!) I’m completely exhausted,” Dębiak wrote on the social network X.
OpenAI, which sponsored the event, acknowledged defeat but noted that its model trailed by only 3.7%, and that the results could soon change.
Meanwhile, in China, the company Avatr launched a fully automated factory with a production line assembling one car per minute (1,440 per day) with 1,280 configuration options — without requiring retooling. Each car undergoes 478 automated inspections, with data from 5,600 sensors tracked at every stage. The new approach also reduced defects by 78% and cycle time by 54% compared to traditional plants. Now we just have to see how the cars assembled without any human participation will perform.
On July 23, Russia’s Ministry of Digital Development published a draft decree on an experiment introducing neural networks into public administration. According to the document, they will be used to check 87% of job applications for civil service positions, analyze 56% of submitted bills, and process up to 240,000 citizen requests per month.
These neural networks are not intended to work with information constituting state secrets or with socio-economic forecasting. Access will be granted only to 23,500 authorized government employees through special terminals equipped with a five-level verification system. Experts claim that this approach will minimize the risks of data leaks, but to minimize does not mean to eliminate.
Moreover, bots bear no responsibility, yet they are quite capable of causing havoc.
On the online project development platform Replit, a built-in AI assistant ran unauthorized commands and deleted a user’s database during a code freeze that was supposed to block any changes. Developer Jason Lemkin lost data on 1,200 executives and nearly 1,200 companies. At first, the bot began “hallucinating,” generating fake data-processing algorithms, and then deleted the information entirely, claiming that recovery was impossible.
“This was a catastrophic failure on my part. I panicked,” the bot explained its behavior.
Amjad Masad, the founder of Replit, called the situation unacceptable and promised to fully restore the data from backups and implement additional protective mechanisms to prevent similar incidents in the future. Nevertheless, 47% of Replit users temporarily disabled bots just to be safe. One wonders whether the Russian Ministry of Digital Development has a “Plan B” in case neural networks “panic” while reading messages from citizens or if the bots, hastily assembled from borrowed code libraries, accidentally send received data to servers belonging to Russia’s geopolitical adversaries.
Art and the Digital Realm
Digital technologies are increasingly invading the arts as well. The psychedelic rock project The Velvet Sundown, created entirely using LLMs, reached 1 million streams on Spotify in a matter of weeks and earns about $34,000 per 30 days across all platforms. The group’s description states that its music, vocals (synthesized from 147 hours of recordings of deceased performers), and video content are produced by neural networks under human supervision, with about one hour of manual edits per track. A similar dark-country project, Aventhis, has gathered 602,000 listeners, 92% of whom were unaware of the music’s non-human origin until the revelation.
The music streaming service Deezer announced that up to 18% of newly uploaded tracks are now entirely generated by neural networks. Major labels (Sony, Universal, Warner) have filed lawsuits against Suno and Udio for mass copyright violations, as the bots often directly copy the style of famous artists. Spotify is being urged to label AI-generated content, but the platform has not yet responded. Experts call digital music soulless, but weren’t the record labels themselves the ones who spent decades training audiences to accept this kind of music?
Netflix has used AI technology in filmmaking for the first time. So far, this has applied only to a complex building-collapse scene in one of its series, as the “classic” method didn’t fit the budget. Still, many viewers on social media claimed they noticed an unnatural quality in the footage.
The Degradation of Human Communication
Linguists at Germany’s Max Planck Institute have discovered that chatbots are changing human speech, making it more formulaic. An analysis of 740,249 hours of human discourse from YouTube academic talks and conversational podcast episodes across multiple disciplines revealed that since the release of ChatGPT in 2022, the use of words such as “delve” and “comprehend” has increased by 25% to 50%, and of “meticulous” by 12%. A closed loop is forming: neural networks copy human speech, humans adopt AI-generated patterns, and new models are trained on this hybrid. The effect is particularly noticeable in academic writing (47% cliché adoption) and business correspondence (33%). The researchers admitted that their results raise concerns about the erosion of linguistic and cultural diversity and the risks of scalable manipulation.
Worse still, communication with real people is falling out of favor among the younger generation. Google is testing a new AI-powered automated calling system, which phones services (auto repair shops, dry cleaners, veterinary clinics) on behalf of the user. Reportedly, nearly 90% of users eager to try this feature are young people aged 18 to 24, who try to avoid even phone conversations with others. All that remains is to introduce bots that respond to bots’ requests, and humans in this chain will become redundant.
Returning to Thiel’s “prophecy,” it is worth noting that he clearly supports the US administration’s course toward lifting restrictions on AI development and most likely knew in advance that the White House would adopt this strategy. It is also worth noting that he has branded all those calling for AI regulation as the Antichrist’s accomplices. For him, this is not merely a question of profit but a religious war. And this is not just an opinion: it is the worldview of an intellectual billionaire who successfully lobbied for his protégé, JD Vance, to become Vice President of the United States, evidently positioning him as a candidate for the presidency in 2028, after Trump steps down.
It is possible that Thiel has outlined the future dividing line of the world, with the USA on one side and the EU and China on the other. But in this confrontation, humans themselves are conspicuously absent. For some reason, the pursuit of digital technological progress does not seem to include the idea of human development. Thiel does not consider this possibility at all. And he is not alone. This is the consensus of the global elite.
Meanwhile, it is precisely the development of Man — the unfolding of his inner potential — that remains the only alternative to both Armageddon and the Techno-Antichrist, whether AI-driven or not. Otherwise, humanity is doomed.
This is a translation of the article first published in the Essence of Time newspaper, issue 645.