Research shows that combining language models with traditional algorithms achieves better results than LLM alone, Google is introducing a tool for building AI applications without code, and OpenAI is tightening security standards. Here's a roundup of last week's AI highlights.
Highlights of the week:
- Hybrid AI systems combining LLM with classical algorithms outperform pure language models
- Google Cloud introduces Gemini API Expander for building AI apps without coding
- OpenAI published security standards for scaling models to human intelligence
- Microsoft releases tool for automatic detection of vulnerabilities in AI systems
- New study shows AI can predict risk of death with 75% accuracy
Hybrid AI systems outperform pure language models
Increasingly, researchers are finding that combining large language models with traditional algorithms provides better results than LLM alone. Hybrid systems use LLM for natural language processing and traditional algorithms for tasks requiring precise computation or manipulation of structured data. This approach shows superior performance in solving complex tasks such as mathematical reasoning and scientific simulations, where pure LLMs often fail. The move to hybrid architectures represents a significant shift in the design of AI systems.
Google Cloud launches Gemini API Expander
Google Cloud has introduced Gemini API Expander, a tool that enables businesses to build AI applications without the need for coding. The platform uses transfer learning to adapt Gemini's pre-trained models to specific business use cases. A new Prompt Shield feature provides protection against prompt injection attacks, while an updated RAG API improves document search accuracy. The company also announced the general availability of the Gemini 1.5 Flash model and the expansion of the context window to 2 million tokens, enabling the processing of large documents.
OpenAI strengthens security standards
OpenAI has published new security standards aimed at managing the risks associated with developing models capable of human-level intelligence. The framework includes mandatory assessment of model capabilities before during training, setting limits for autonomous replication, and establishing procedures for shutting down highly advanced systems. The company also announced the creation of a Safety Advisory Council comprised of external experts to oversee the implementation of the policy. The move reflects growing concerns about the potential risks of superintelligent AI systems.
Microsoft releases AI vulnerability detection tool
Microsoft has introduced a new security tool capable of automatically identifying vulnerabilities in AI systems. The tool uses a combination of static code analysis and dynamic testing to uncover security gaps in AI applications. The tool is able to detect common issues such as prompt injection, data poisoning and model inversion attacks. The company also published a set of best practices for securing AI applications, including recommendations for controlling access to models and monitoring for unusual behavior.
AI predicts mortality with 75% accuracy
A new study published in Nature shows that an AI model trained on routine medical records can predict the risk of death with 75% accuracy. The system analyses data such as blood pressure, cholesterol levels and lifestyle without access to explicit diagnoses. The model outperformed traditional predictive tools used in clinical practice and was able to identify high-risk patients months before potential health complications. This approach could revolutionize preventive medicine, although it raises ethical questions about privacy and the use of sensitive data.
The Batch - DeepLearning.Ai by Andrew Ng / gnews.cz - GH