Explore how vision-language-action models like Helix, GR00T N1, and RT-1 are enabling robots to understand instructions and act autonomously.
Just when you thought the pace of change in AI models couldn't get any faster, it accelerates yet again. In the popular news media, the introduction of DeepSeek in January 2025 created a moment that ...
Figure AI has unveiled Helix, a pioneering Vision-Language-Action (VLA) model that integrates vision, language comprehension, and action execution into a single neural network. This innovation allows ...
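The single-network architecture this snippet describes is easier to picture with a toy example. Below is a minimal PyTorch sketch of the general VLA pattern: an image observation and a tokenized language instruction are encoded, fused, and mapped to a continuous action. Every class name, encoder, and dimension here is a hypothetical stand-in for illustration, not Figure's actual Helix implementation.

```python
# Minimal sketch of the generic VLA pattern: one network maps an image
# observation plus a tokenized instruction to a continuous action.
# All module choices and sizes are illustrative, not Figure's Helix design.
import torch
import torch.nn as nn

class TinyVLA(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, action_dim=7):
        super().__init__()
        # Vision encoder: a small CNN standing in for a pretrained ViT.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Language encoder: mean-pooled token embeddings standing in for an LLM.
        self.text = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
        # Fusion + action head: predicts one continuous action
        # (e.g. robot joint targets).
        self.policy = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, image, instruction_tokens):
        v = self.vision(image)             # (B, embed_dim)
        t = self.text(instruction_tokens)  # (B, embed_dim)
        return self.policy(torch.cat([v, t], dim=-1))  # (B, action_dim)

# Usage: one camera frame and one tokenized instruction in, one action out.
model = TinyVLA()
frame = torch.rand(1, 3, 224, 224)       # RGB observation
tokens = torch.randint(0, 1000, (1, 8))  # e.g. "pick up the red cup", tokenized
action = model(frame, tokens)
print(action.shape)  # torch.Size([1, 7])
```

The point of the sketch is the end-to-end coupling: because vision, language, and action share one forward pass, the same gradients shape all three, which is what distinguishes this family of models from pipelines that bolt a separate planner onto a perception stack.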
As I highlighted in my last article, two decades after the DARPA Grand Challenge, the autonomous vehicle (AV) industry is still waiting for breakthroughs—particularly in addressing the “long tail ...
A new preprint introduces a benchmark called SUPERGLASSES that evaluates how well vision-language models ...
Chinese AI startup Zhipu AI, also known as Z.ai, has released its GLM-4.6V series, a new generation of open-source vision-language models (VLMs) optimized for multimodal reasoning, frontend automation, and ...
Microsoft’s Phi-4-reasoning-vision-15B model shows how compact AI systems can combine vision and reasoning, signalling a broader industry move towards efficiency rather than simply building ever ...