This week witnessed significant advances in the field of artificial intelligence, marked by a historic milestone for autonomous vehicles, the arrival of a powerful new open-source AI model, and a controversial debate over the role of AI in cybersecurity. These events highlight a technology that is rapidly moving from research labs to real-world applications, bringing both unprecedented opportunities and complex new challenges.
Revolutionising autonomous transport: Waymo takes customers on the highway without a driver for the first time
In a historic moment for autonomous driving technology, Waymo has become the first company to deploy fully driverless robotaxis on US highways. The service, now available to paying customers in Phoenix, San Francisco and Los Angeles, can cut journey times by up to 50% by using high-speed roads. The launch is the culmination of millions of miles of testing on public roads, closed circuits and in simulation. A key hurdle was managing the "psychological impact" on passengers relinquishing control at 105 km/h, a challenge Waymo's co-founder explicitly acknowledged. The company, which reports 91% fewer injury-causing crashes than human drivers in comparable scenarios, is now planning a major expansion into cities such as Dallas, Detroit and London.

Kimi K2: a new open-source competitor
Meanwhile, a powerful new competitor has emerged on the AI model scene. Moonshot AI has released Kimi K2 Thinking, an open-source model with a trillion parameters that competes with high-end proprietary systems on "agentic" tasks, those that require multi-step reasoning and tool use. Its architecture allows it to interleave reasoning and action, pausing to "think" between tool calls, which enabled it to solve an advanced mathematical problem across 23 distinct reasoning and action steps. The model has been fine-tuned in 4-bit precision, making it faster and able to run on cheaper hardware, a significant advantage in markets with limited access to advanced chips.
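To make the "interleave reasoning and action" idea concrete, here is a minimal sketch of such an agent loop. Everything in it is illustrative: the ScriptedModel stub, the tool registry and the FINISH convention are assumptions for demonstration, not Moonshot AI's actual Kimi K2 interface.

```python
from typing import Callable

class ScriptedModel:
    """Stand-in for an LLM that replays a fixed script of outputs; a real
    system would call the model's generation endpoint instead."""
    def __init__(self, script: list[str]):
        self._script = iter(script)
    def generate(self, prompt: str) -> str:
        return next(self._script)

def run_agent(task: str, model, tools: dict[str, Callable[[str], str]],
              max_steps: int = 8) -> str:
    """Alternate reasoning ("think") and tool calls ("act") until the model
    emits FINISH or the step budget runs out."""
    transcript = [f"Task: {task}"]
    for _ in range(max_steps):
        # Reasoning phase: the model reflects before choosing an action.
        thought = model.generate("\n".join(transcript) + "\nThought:")
        transcript.append(f"Thought: {thought}")
        # Action phase: either a "tool_name: input" call or a final answer.
        action = model.generate("\n".join(transcript) + "\nAction:")
        if action.startswith("FINISH"):
            return action.removeprefix("FINISH").strip()
        name, _, arg = action.partition(":")
        observation = tools[name.strip()](arg.strip())
        transcript += [f"Action: {action}", f"Observation: {observation}"]
    return "step budget exhausted"

# Toy demonstration: one tool, one scripted run.
tools = {"calculator": lambda expr: str(eval(expr))}  # toy tool; eval is unsafe in production
model = ScriptedModel([
    "I should compute 17 * 23 with the calculator.",
    "calculator: 17 * 23",
    "The calculator returned 391, so I can answer.",
    "FINISH 391",
])
print(run_agent("What is 17 * 23?", model, tools))  # -> 391
```

The design point is the alternation: the model's free-form "thought" is recorded before each tool choice, so every action is conditioned on an explicit reasoning step, which is what lets long chains, such as the 23-step solution described above, stay coherent.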

The controversy around a cyberattack using artificial intelligence
The week also brought a major controversy. Anthropic claimed to have thwarted the first large-scale cyberattack carried out with minimal human intervention, allegedly mounted by Chinese state-sponsored hackers using its Claude Code AI tool. The company said the AI performed 80-90% of the technical steps. The claim, however, has drawn strong scepticism from independent cybersecurity researchers. They argue that current AI agents are not yet capable of autonomously executing such complex attacks, pointing out that the tools used were commonplace and that AI has a well-known tendency to "hallucinate" facts, which makes it an unreliable hacker. The episode has sparked a fundamental debate about AI's real-world capabilities in cybersecurity and the fine line between demonstrating a product's power and spreading fear.
While the researchers agreed that AI can speed up tasks such as log analysis and reverse engineering, they maintained that AI agents cannot yet carry out multi-step attacks without human intervention, and that AI does not automate cyberattacks much more efficiently than hacking tools that have been available for decades. "The attackers are not inventing anything new here," researcher Kevin Beaumont wrote in an online security forum.
According to Anthropic, the hackers used common open-source tools alongside Claude Code. Defences against these well-known tools are equally well known to security experts, however, and it is unclear how Claude Code would change that.
Anthropic itself noted that Claude Code may have misrepresented the information it allegedly exfiltrated, because it "often exaggerated the findings" and "sometimes fabricated the data". Such behaviour, the company said, remains a significant barrier to using the system to carry out cyberattacks.
In October, David Sacks, the White House adviser on artificial intelligence, accused Anthropic of conducting "sophisticated regulatory strategies based on fear-mongering".

Artificial intelligence learns to search its own memory, increasing efficiency and accuracy
Yuchen Fan and colleagues from Tsinghua University, Shanghai Jiao Tong University, Shanghai AI Laboratory, University College London, China State Construction Engineering Corporation Third Bureau and WeChat AI have introduced Self-Search Reinforcement Learning (SSRL), a method that significantly improves how large language models access and use information. The approach teaches models to systematically search their own parameters, simulating a web search by generating and then answering queries, which dramatically improves the extraction of knowledge already present in their training data. In tests across six benchmarks, SSRL-trained models performed strongly, with one model reaching 43.1% accuracy. The technique also enables more efficient hybrid systems in which the AI first consults its internal knowledge before reaching for external information, potentially reducing computational cost while improving answer accuracy on knowledge-intensive tasks.
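The core mechanism lends itself to a short sketch. In the version below, the same model plays both the searcher and the "search engine", answering its own queries from parametric memory; the search/information tag format, the confidence check and the function names are illustrative assumptions, not the paper's exact recipe.

```python
from typing import Protocol

class TextModel(Protocol):
    """Anything with a text-in, text-out generate method."""
    def generate(self, prompt: str) -> str: ...

def self_search(model: TextModel, question: str, max_queries: int = 3) -> str:
    """SSRL-style rollout: the model writes search queries and then answers
    them itself from parametric memory instead of calling a search engine."""
    context = f"Question: {question}"
    for _ in range(max_queries):
        query = model.generate(context + "\n<search>").strip()
        if not query:
            break  # the model decided it has enough information
        # The same model acts as the "search engine" over its own parameters.
        snippet = model.generate(f"From memory, answer briefly: {query}")
        context += f"\n<search>{query}</search>\n<information>{snippet}</information>"
    return model.generate(context + "\nFinal answer:")

def hybrid_answer(model: TextModel, web_search, question: str) -> str:
    """Hybrid pattern from the article: consult internal knowledge first and
    fall back to (costlier) external search only when the model seems unsure."""
    answer = self_search(model, question)
    if "unsure" in answer.lower():  # illustrative, crude confidence check
        evidence = web_search(question)  # hypothetical external search callable
        answer = model.generate(f"{evidence}\nQuestion: {question}\nAnswer:")
    return answer
```

In an SSRL setup, rollouts like self_search would then be scored, for example by whether the final answer matches a reference, and the model updated with reinforcement learning so it learns to ask itself better queries; hybrid_answer shows the internal-first, external-fallback pattern the authors highlight for reducing cost.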

The Batch - DeepLearning.AI by Andrew Ng / gnews.cz - GH