Type: Web Article
Original link: https://x.com/karpathy/status/1938626382248149433?s=43&t=ANuJI-IuN5rdsaLueycEbA
Publication date: 2025-09-04
Summary #
WHAT - The article discusses the race to build an LLM "cognitive core": a large language model (LLM) of only a few billion parameters, natively multimodal and always on, running on every computer as the kernel of LLM-based personal computing.
WHY - This is relevant for AI business because it points to an emerging trend toward smaller yet highly capable LLMs, which could change how artificial intelligence is integrated into personal devices, opening new market opportunities and improving the cognitive capabilities of AI applications.
WHO - The main players are researchers and tech companies developing advanced LLMs, with a particular focus on Andrej Karpathy, an influential AI researcher.
WHERE - The article sits within the race for innovation in large language models, with a specific focus on personal computing and multimodal integration.
WHEN - The discussion is current and reflects an emerging trend in the AI sector, with potentially significant impact in the coming years.
BUSINESS IMPACT:
- Opportunities: Developing lightweight and multimodal LLM models for personal computing can open new markets and improve AI integration in personal devices.
- Risks: The competition is intense, and other companies might develop similar or superior solutions.
- Integration: These models can be integrated into the existing stack to enhance the cognitive capabilities of AI applications.
TECHNICAL SUMMARY:
- Core technology stack: Large language models (LLMs) with a few billion parameters, designed to be multimodal.
- Scalability: These models are designed to be lightweight and always active, small enough to run continuously on personal devices.
- Technical differentiators: Multimodal, always-on operation that trades encyclopedic knowledge for greater cognitive capability.
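The "few billion parameters" framing can be made concrete with a back-of-envelope memory estimate. The sketch below is illustrative, not from the article: it assumes only the model weights are counted (KV cache and activations add further overhead) and uses hypothetical parameter counts and quantization levels typical of this model class.

```python
# Rough weight-memory footprint for a "cognitive core"-class model.
# Counts weights only; KV cache and activations add extra RAM on top.

def weight_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory in decimal GB for a given size and quantization."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for params in (3, 7):            # "a few billion parameters"
    for bits in (16, 8, 4):      # fp16, int8, 4-bit quantization
        gb = weight_footprint_gb(params, bits)
        print(f"{params}B params @ {bits}-bit ≈ {gb:.1f} GB")
```

At 4-bit quantization, a 3B-parameter model fits in roughly 1.5 GB of weight memory, which is what makes the always-on, on-device scenario plausible on ordinary laptops and phones.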
Use Cases #
- Private AI Stack: Integration into proprietary pipelines
- Client Solutions: Implementation for client projects
- Strategic Intelligence: Input for technological roadmaps
- Competitive Analysis: Monitoring AI ecosystem
Resources #
Original Links #
- The race for LLM “cognitive core” - Original link
Article recommended and selected by the Human Technology eXcellence team, processed through artificial intelligence (in this case with the LLM HTX-EU-Mistral3.1Small) on 2025-09-04 19:28.
Original source: https://x.com/karpathy/status/1938626382248149433?s=43&t=ANuJI-IuN5rdsaLueycEbA
The HTX Take #
This topic is at the heart of what we build at HTX. The technology discussed here — whether it’s about AI agents, language models, or document processing — represents exactly the kind of capability that European businesses need, but deployed on their own terms.
The challenge isn’t whether this technology works. It does. The challenge is deploying it without sending your company data to US servers, without violating GDPR, and without creating vendor dependencies you can’t escape.
That’s why we built ORCA — a private enterprise chatbot that brings these capabilities to your infrastructure. Same power as ChatGPT, but your data never leaves your perimeter. No per-user pricing, no data leakage, no compliance headaches.
Want to see how ready your company is for AI? Take our free AI Readiness Assessment — 5 minutes, personalized report, actionable roadmap.
Related Articles #
- Nice - my AI startup school talk is now up! - LLM, AI
- Huge AI market opportunity in 2025 - AI, Foundation Model
FAQ #
Can large language models run on private infrastructure?
Yes. Open-source models like LLaMA, Mistral, DeepSeek, and Qwen can run on-premise or on European cloud. These models achieve performance comparable to GPT-4 for most business tasks, with the advantage of complete data sovereignty. HTX's PRISMA stack is designed to deploy these models for European SMEs.
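As one illustration of how simple self-hosting has become, the CLI fragment below uses Ollama, a popular open-source local-model runner (one option among several, and not mentioned in the article). It assumes Ollama is installed and the local service is running on its default port.

```shell
# Pull and run an open model entirely on local hardware -- no data leaves the machine.
ollama pull mistral

# Interactive one-shot prompt from the command line.
ollama run mistral "Summarize GDPR's data-residency requirements in two sentences."

# The same model is also reachable via Ollama's local REST API.
curl http://localhost:11434/api/generate \
  -d '{"model": "mistral", "prompt": "Hello", "stream": false}'
```

The same pattern applies to LLaMA, DeepSeek, or Qwen weights; only the model name changes, which is what makes a model-agnostic deployment stack practical.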
Which LLM is best for business use?
The best model depends on your use case. For document analysis and chat, models like Mistral and LLaMA excel. For data analysis, DeepSeek offers strong reasoning. HTX's approach is model-agnostic: ORCA supports multiple models so you can choose the best fit without vendor lock-in.