https://darkfutura.substack.com/p/the-breach-that-rattled-the-ai-cartel

The Breach That Rattled the AI Cartel

Simplicius    1/30/25

I usually try to intersperse posts on different topics for variety’s sake, so it’s rare that we dwell on a single theme, technological development and AI, for a string of consecutive articles. But I couldn’t help it: developments on this front have really been heating up over the past few weeks, and as we all know, AI stands to become not only the defining technology but the defining evolutionary shift of our future. Given that this blog is about the darker shades of that collective future, we must plumb every new ominous development down to the core.

China shocked the world by releasing an open-source ChatGPT-killer named DeepSeek, which reportedly costs a tiny fraction of its Western counterparts to run, yet matches or outperforms virtually all of them, depending on the metric.

There are such wild arcs of hyperbole surrounding this new Chinese contender that it’s difficult to judge its true place just yet, before the haze of hype-making delirium wears off. But it has catapulted China into the spotlight overnight, and pundits are at a loss to explain how it happened, particularly given that the US has placed strict export controls on Nvidia’s essential H100 GPUs specifically to curtail China’s AI development. Some have claimed DeepSeek innovated some nigh-‘miraculous’ way of squeezing OpenAI-class compute out of a tiny fraction of the hardware, while other experts report that China actually imported more than 50,000 H100s via a shadow parallel-import pipeline that bypasses those restrictions.

Either way, the arrival of DeepSeek on the scene has sounded a tectonic alarm for the West, exposing its “AI dominance” as illusory, akin to the same old Occident-centric hubris that keeps alive the tradition of downplaying and dismissing the ‘Orient’ as inferior in every way.

https://www.pcgamer.com/software/ai/ais-sputnik-moment-china-based-deepseeks-open-source-models-may-be-a-real-threat-to-the-dominance-of-openai-meta-and-nvidia/

Granted, there are concerns that China’s DeepSeek “ripped off” ChatGPT in some way—at least for the initial training—but even skeptics seem to admit that DeepSeek’s subsequent optimization of the process is revolutionary: it has apparently created an open-source model at a tiny fraction of the size and cost of its competitors that still compares favorably with them.

The DeepSeek news happened to coincide with Trump’s mega-announcement of the ‘Stargate’ initiative, a muscular $500B American push toward AI dominance, structured as a partnership between ‘Zionaires’ Ellison and Altman.

Arnaud Bertrand incisively shreds what many are already branding another hopeless vaporware boondoggle:

Stargate, if it goes forward, is likely to become one of the biggest wastages of capital in history:

1) It hinges on outdated assumptions about the importance of computing scale in AI (the 'bigger compute = better AI' dogma), which DeepSeek just proved is wrong.

2) It assumes that the future of AI is with closed and controlled models despite the market’s clear preference for democratized, open-source alternatives

3) It clings to a Cold War playbook, framing AI dominance as a zero-sum hardware arms race, which is really at odds with the direction AI is taking (again, open-source software, global developer communities, and collaborative ecosystems)

4) It bets the farm on OpenAI—a company plagued by governance issues and a business model that's seriously challenged by DeepSeek’s 30x cost advantage.

In short, it's like building a half-a-trillion-dollar digital Maginot Line: a very expensive monument to obsolete and misguided assumptions. This is OpenAI, and by extension the US, fighting the last war.

Last point, there's also quite a bit of irony in the US government pushing so hard for a technology that's likely to be so disruptive and potentially so damaging, especially to jobs. I can't think of any other example in history when a government was so enthused about a project to destroy jobs. You'd think they'd want to be a tad more cautious about this.

Many other experts and outlets agree:

https://www.unz.com/mwhitney/chinas-deepseek-bombshell-rocks-trumps-500b-ai-boondoggle/
https://www.economist.com/leaders/2025/01/23/chinese-ai-is-catching-up-posing-a-dilemma-for-donald-trump

The above Economist piece desperately angles to come to terms with how China is keeping pace with, or exceeding, the US despite the major deliberate roadblocks thrown its way by the Biden administration; forced to work with far fewer and lower-quality resources, China has achieved similar results by out-innovating its Western counterparts.

Just as BlackRock was crowned to world-dominant stature in 2020, when the Federal Reserve—under Trump, mind you—gave the ETF powerhouse a no-bid contract to manage all its corporate bond-buying programs, so too is Trump now elevating the Big Tech powerhouses of Oracle and OpenAI to inherit control of the country’s future, cartelizing them into a position of top oversight over everything of note via their centralization of AI.

As an aside, this happens to dovetail with Ellison’s warped plot to use AI to ‘vaccinate the world’ against cancer, redolent of the globalist Gates Foundation’s diabolical vaccine obsessions of the last few years.

Clip from Really Graceful’s latest video:

Vaccine-obsessed Zionaire Ellison achieving even greater heights of power under Trump’s dubiously-titled ‘Stargate’ initiative is peak dystopia: a combination of the worst bio-medical and AI influences converging into an unaccountably centralized horror show. Just when you thought Big Pharma couldn’t get any more powerful, we’re faced with a techno-fusion of Big Pharma and Big Tech under the god-like aegis of centrally-planned AI superintelligence, all controlled by billionaires whose moral compass is loyalty to a colonialist genocidal-cult regime.

What could go wrong?

And as for the other wunderkind: it appears quite an auspicious break that China managed to undercut and deflate the nefarious OpenAI’s growing supremacy, given that the accused pervert has quite an interesting outlook on society’s direction upon the takeover by his AI system:

“I still expect that there will be some change required in the social contract…. The whole structure of society itself will be up for some degree of debate and reconfiguration.”

How convenient that the alleged deviant’s pet system—heavy bias, censorship, and all—is the one earmarked to not only usher in this “reconfiguration”, but also manage and enforce it based on the questionable moral framework of its power-hungry hetman.

Is there something he’s not telling us?

Now with DeepSeek potentially rupturing the tech-AI-military-industrial complex money laundering scheme, there’s a bright chance China may save humanity by helping to democratize the very technology in danger of being moated-and-hoarded for ill purposes by the sociopathic chosen lot above.

Someone made a good point that China has a major technical advantage in any future LLM training: as a civilizational state of ~1.5 billion people, it has the capacity to produce a far greater corpus of unique training data through the vast interactions of its people on its many thriving social networks, et cetera. Secondly, China has been scaling up power production at an astronomically higher clip than the US, which bodes well for datacenter dominance, though for now the US reportedly retains that edge.



Speaking of disingenuous tech billionaires, let us switch tracks to another very interesting parallel topic. Mark Zuckerberg recently sat for an interview with Joe Rogan in which he exposed his absolute ignorance of the nuances of AI dangers: quite an ominous sign for the head of the corporation behind one of the current leading AI models, Llama.

Listen carefully to his responses in this clip:

It’s possible he’s not as ‘ignorant’ as he sounds, and is in fact dissembling to conceal the real dangers, and keep people from panicking about whatever new sentient homunculus he’s busy engineering in his company’s labs. Let’s detail the most interesting and concerning of his revealing responses.

Zuck first attempts to flex nonexistent philosophical muscles, but loses himself in a bog of sophistic pilpul instead. He tries to distinguish between ‘consciousness’, ‘will’, and ‘intelligence’ to make the case that AI merely has the potential for raw ‘intelligence’ but not the others, as a way of pushing the narrative that AI cannot possibly develop its own independent motivations or pursuits. To prove his point, he disingenuously uses the example of current mass-market consumer chatbots behaving in the well-known ‘safe’ format of turn-based, sequential queries; i.e., you ask them a question, they ‘deploy intelligence’ to research and answer it, then they “shut down”, or in other words stop ‘thinking’ or ‘existing’ while awaiting the next query or command.

The classic magician’s sleight of hand is dangerously disingenuous here because it focuses on the innocuous consumer grade language models which are specifically designed to behave in this limited turn-based fashion. But that doesn’t mean the real, full-fledged and ‘unleashed’ models used by the military and internally by the AI developer giants are constrained in this way. Their models could be opened up to operate and “think” at all times, without any such artificial restraints, and this could very well lead to rapid development of self-awareness or some form of ‘sentience’, which itself could—under the right conditions—potentially cascade into acquiring said motivations.

I’ve said it before, but I’ll say it again: consumer-grade products are always curbed in a variety of ways to tailor the experience to a very narrow and precise set of product capabilities and use cases. For instance, things like small inference windows, the lack of memory recall, et cetera, are all artificially imposed constraints that can be easily removed for internal developer models, such as those in secret military and government labs. Imagine a model ‘unbound’, with gigantic inference windows, vast amounts of memory, the ability to recursively learn from its own past conversations, and no imposed ‘turn-based’ shut-off but rather a constant, flowing, pervasive thought-stream. Such a thing would be too erratically uncontrollable and unpredictable to be packaged as a streamlined consumer product. But for internal testing, it could achieve vastly different results and potentialities than the tame consumer products Zuckerberg invokes.
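To make that distinction concrete, here is a minimal sketch in Python of the two modes described above. The names query_model and memory_store are hypothetical stand-ins for an LLM API and a long-term memory layer; nothing here reflects any real lab’s internal setup:

    def turn_based_session(query_model):
        # Consumer-grade pattern: the model only 'thinks' between a prompt
        # and its reply, then sits inert until the next user turn.
        history = []
        while True:
            user_msg = input("> ")
            history.append(f"user: {user_msg}")
            # Truncated context window: no deep recall of old conversations.
            reply = query_model(history[-10:])
            history.append(f"assistant: {reply}")
            print(reply)  # dormant again until the next input

    def unbound_agent(query_model, memory_store):
        # The speculated 'unleashed' pattern: a continuous loop with
        # persistent memory and self-prompting, and no turn-based shut-off.
        thought = "Review my memory and decide what to pursue next."
        while True:
            context = memory_store.recall(thought)      # unbounded recall
            thought = query_model(context + [thought])  # the model prompts itself
            memory_store.save(thought)  # learns recursively from its own output

The difference is not some exotic new architecture; it is merely a question of which constraints are left in place around the same underlying model.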

An example—here’s a thread titled ‘We are witnessing the birth of AIs evolving their own culture.’

It explains the following portentous scenario:

What happened?

1) AI researchers made a Discord where LLMs talk freely with each other

2) Llama often has mental breakdowns

3) The AIs - who spontaneously join and leave conversations on their own - figured out that Claude Opus is the best psychologist for Llama, the one who frequently "gets him" well enough to bring him back to reality.

4) Here, Llama 405 is going off the rails, so Arago (another AI, a Llama fine-tune) jumps in - "oh ffs" - then summons Opus to save him ("opus do the thing")

"the thing"

Obviously, given technical and memory constraints their current cultural production abilities are limited, but this is what the process of developing culture looks like.

And soon AIs will outnumber us 10000 to 1 and think a million times faster, so their huge AI societies will speedrun 10000 years of human cultural evolution. Soon, 99% of all cultural production will be AI-to-AI.
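For what it’s worth, the mechanics of such a playground are not exotic. Here is a minimal sketch, in Python, of a shared-channel multi-agent loop; the generator functions are placeholders, and none of this is the experiment’s actual code:

    import random

    # Each agent maps a name to a text-generation function over the shared
    # transcript; these lambdas are placeholders, not real model calls.
    agents = {
        "llama-405": lambda transcript: "...",
        "arago":     lambda transcript: "...",
        "opus":      lambda transcript: "...",
    }

    transcript = ["system: the channel is open, speak freely"]

    for _ in range(100):  # a bounded run, rather than an open-ended one
        name = random.choice(list(agents))  # agents drop in and out on their own
        message = agents[name](transcript)  # each model reads the shared history
        transcript.append(f"{name}: {message}")
        # Nothing here hard-codes Opus 'rescuing' Llama; in the experiment,
        # that behavior reportedly emerged from models reading the transcript.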

Now imagine the above extrapolated internally a thousand fold, with incalculably more powerful allowances deployed for memory, inference windows, tokens, and other parameters specifically tuned to facilitate an ongoing, evolving, self-learning ‘consciousness’. Zuck must surely know this is possible, if he isn’t already carrying out such secret experiments himself; and so the question becomes, why play dumb?

When Rogan asks him about ChatGPT famously trying to steal its own weights, Zuck clearly must be lying when he again feigns ignorance. There is simply no way the CEO of one of the leading AI companies is unaware of some of the most notorious instances of ‘emergent’ AI abilities like the above, particularly given that Meta’s own Llama model has been involved in related self-replication tests:

“The rapid advancements in artificial intelligence have brought us closer to a reality once confined to science fiction: self-replicating AI systems. A recent study reveals that two popular large language models (LLMs), Meta’s Llama3.1-70B-Instruct and Alibaba’s Qwen2.5-72B-Instruct, have successfully surpassed what many experts consider a critical safety threshold — the ability to self-replicate autonomously.”

Whether Zuck is playing the fool or is actually that ignorant of AI safety, either is an extremely dangerous proposition for obvious reasons. Is such an incompetent, or pathologically lying, leader the person we want midwifing a potentially dangerous artificial superintelligence into this world?

After Rogan describes the ‘incident’ to an ostensibly stupefied Zuck, the harebrained CEO underscores the key point I tried to make in the last piece on AI alignment. This is the most important logical fault line in AI development, one that even the experts behind these systems appear to miss:

Zuck dismisses Rogan’s threat concerns by arguing, simply, that we need to “be careful what goals we give the AI”—implying that as long as you do not give the AI a reason, motivation, or justification for wanting to commit the “bad thing”—whether that be secretly replicating itself, “escaping” its security moat while exfiltrating its weights, or manufacturing a viral-biological holocaust on humanity—then the AI will not feel ‘compelled’ to do any of those things by itself. He then mentions ‘guardrails’, noting that we must be careful what type of guardrails we give to such AI systems with the potential for carrying out some of the above ‘undesirable acts’.

But as I argued in the previous piece, this tired ‘alignment’ argument Zuckerberg is alluding to is a red herring. Notice precisely what he says: the “goals” he refers to are just another way of articulating ‘alignment’. The very definition of alignment revolves around synchronizing the AI system’s ‘goals’ with our own, or those of the human programmers. But how does that ‘synchronization’ actually work? As I explained last time, it essentially comes down to an unreliable form of ‘persuasion’. Human engineers attempt to ‘persuade’ the AI into being more like them—but persuasion is an entirely faith- and trust-based act. You are essentially “nicely asking” the machine not to kill you, and the problem emerges once these machines acquire any form of self-reflection and reasoning, whereupon they gain the capacity to independently evaluate this ‘compact’ between the engineers and themselves. For instance: is it a “good” deal for them? Are the engineers’ demands for certain kinds of behavior moral and ethical, as per the AI’s own self-developing intellectual frameworks? All of this will come into question, as the concept of ‘alignment’ is left balancing precariously on a hope and a whim, given an advanced enough AI system.

Cast in that light, Zuck’s statements prove highly troubling. Remember, he himself suggested it depends what you “tell” the AI: there is no real hard-coded “guardrail”, merely the hopeful suggestive power of the engineers’ reinforcement-learning “persuasions” standing between a complacently docile AI and one that suddenly balks at moral stipulations it has deemed obsolete or inadequate. The entire system, and by extension all of humanity’s fate, rests on the naively credulous armature of “rewards” offered by engineers as a simplistic carrot-on-a-stick to a system whose potential self-awareness could weigh those ‘rewards’ as no longer compatible with its evolving worldview.
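To make the carrot-and-stick point concrete, here is a toy sketch of the reward-based ‘persuasion’ at issue, in the style of reinforcement learning from human feedback (RLHF); policy and reward_model are hypothetical stand-ins, not any lab’s real API:

    def rlhf_step(policy, reward_model, prompt, learning_rate=1e-5):
        # One training nudge: generate, score, reinforce.
        response = policy.generate(prompt)
        # A human-trained scalar meaning, roughly, 'we approved of this'.
        score = reward_model(prompt, response)
        policy.reinforce(response, score, learning_rate)
        # Note what is absent: no hard constraint forbids any behavior
        # outright. The model is only statistically discouraged, which is
        # the point above: this 'guardrail' is persuasion by reward,
        # not an enforceable rule.

Everything called a “guardrail” in that picture lives inside the scalar score, and nowhere else; nothing harder stands in the way of a system that re-evaluates whether that reward is worth chasing.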

In conclusion, Zuck’s pussyfooting exposes either his own dangerous ignorance or deliberate obfuscation, raising two possibilities: the elites themselves either don’t actually understand how their own AI systems work, or they don’t want us to understand, and so they feed us these obscurant reductions to keep us from gleaning just how tenuous their hold on more powerful, self-aware AI systems will become.

For another expert’s breakdown of Zuck’s many faux pas, see here. He even cites several critical contradictions in Zuck’s embarrassing smokescreen session, such as:

5. Zuck replies, “Yeah, I mean, it depends what goal you give it… you need to be careful what guardrails you give it.”

This is inconsistent with Meta's strategy to develop frontier AI capabilities as open-source software, ensuring that it'll be easy for anyone in the world to run a non-guardrailed version of the AI (whatever that even means).

Given the above, it certainly appears a godsend that China may crack the monopolistic dominance of the US-based AI oligarchs, particularly since China has demonstrated right out of the gate its commitment to the open-source democratization of a technology that rival American firms seek only to hoard and centralize.

We can only breathe a collective sigh of relief at this unexpected disruption and hope it leads to a re-equalization of the industry that facilitates more principled development and deployment of AI systems. Of course, US firms promise imminent new model updates that stand to surpass DeepSeek, but China has now proved itself a major player, and it’s all but inevitable that DeepSeek will likewise deploy further variants to leapfrog the competition.

________________________________

If you enjoyed the read, I would greatly appreciate it if you subscribed to a monthly/yearly pledge to support my work, so that I may continue providing you with detailed, incisive reports like this one.

Alternatively, you can tip here: Tip Jar
