Type: Content Original link: https://x.com/varchasvee_/status/1986811191474401773?s=43&t=ANuJI-IuN5rdsaLueycEbA Publication date: 2025-11-12
Summary #
WHAT - A Twitter post discussing Andrej Karpathy's suggestion to delete tokenizers, in the context of Optical Character Recognition (OCR) models that read text directly from pixels.
WHY - Relevant for AI business because it suggests an innovative approach to improve the efficiency and accuracy of OCR models, eliminating the need for tokenization.
WHO - Andrej Karpathy (author of the original post), Varun Sharma (@varchasvee_, author of the tweet), and the AI developer and researcher community.
WHERE - Part of the technical debate on OCR and NLP within the AI community on Twitter/X.
WHEN - The tweet was published in November 2025, reflecting a current trend of innovation in OCR models.
BUSINESS IMPACT:
- Opportunities: Developing OCR models without tokenizers can reduce complexity and improve accuracy, offering a competitive advantage.
- Risks: The transition may require significant investments in research and development.
- Integration: Possible integration with existing OCR tools to test and validate the tokenizer-free approach.
TECHNICAL SUMMARY:
- Core technology stack: OCR models that read text directly from pixels, bypassing tokenization.
- Scalability and limits: Scalability depends on the model’s ability to handle different resolutions and text types. Limits include the need for large datasets for training.
- Technical differentiators: Elimination of tokenization, reduction of model complexity, potential improvement in accuracy.
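The pixel-first idea above can be illustrated with a minimal sketch: instead of mapping text to token IDs through a learned vocabulary, a vision-style encoder slices the rendered page into fixed-size patches and feeds those directly to the model as its input sequence. The patch size and array shapes below are illustrative assumptions, not details from the original post.

```python
import numpy as np

def patchify(image: np.ndarray, patch: int = 16) -> np.ndarray:
    """Split an H x W grayscale page image into a sequence of
    flattened patch vectors, ViT-style -- no tokenizer or
    vocabulary involved."""
    h, w = image.shape
    # Crop to a multiple of the patch size (illustrative simplification).
    h, w = h - h % patch, w - w % patch
    image = image[:h, :w]
    patches = (
        image.reshape(h // patch, patch, w // patch, patch)
             .transpose(0, 2, 1, 3)          # group pixels by patch
             .reshape(-1, patch * patch)     # one row per patch
    )
    return patches  # shape: (num_patches, patch * patch)

page = np.random.rand(64, 128)   # stand-in for a rendered page
seq = patchify(page)             # 4 x 8 = 32 patches of 256 values each
print(seq.shape)                 # (32, 256)
```

The sequence of patch vectors plays the role that token IDs play in a conventional pipeline, which is why tokenizer quirks (vocabulary gaps, odd merges) simply disappear in this setup.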
Use Cases #
- Private AI Stack: Integration into proprietary pipelines
- Client Solutions: Implementation for client projects
- Strategic Intelligence: Input for technological roadmap
- Competitive Analysis: Monitoring AI ecosystem
Resources #
Original Links #
- "said we should delete tokenizers" - Original link
Article suggested and selected by the Human Technology eXcellence team, processed through artificial intelligence (in this case with LLM HTX-EU-Mistral3.1Small) on 2025-11-12 17:59 Original source: https://x.com/varchasvee_/status/1986811191474401773?s=43&t=ANuJI-IuN5rdsaLueycEbA
The HTX Take #
This topic is at the heart of what we build at HTX. The technology discussed here — whether it’s about AI agents, language models, or document processing — represents exactly the kind of capability that European businesses need, but deployed on their own terms.
The challenge isn’t whether this technology works. It does. The challenge is deploying it without sending your company data to US servers, without violating GDPR, and without creating vendor dependencies you can’t escape.
That’s why we built ORCA — a private enterprise chatbot that brings these capabilities to your infrastructure. Same power as ChatGPT, but your data never leaves your perimeter. No per-user pricing, no data leakage, no compliance headaches.
Want to see how ready your company is for AI? Take our free AI Readiness Assessment — 5 minutes, personalized report, actionable roadmap.
Related Articles #
- If you’re late to the whole “memory in AI agents” topic like me, I recommend investing 43 minutes to watch this video - AI, AI Agent
- We used DeepSeek OCR to extract every dataset from tables/charts ac… - AI
- "🚀 Hello, Kimi K2 Thinking! The Open-Source Thinking Agent Model is here" - Natural Language Processing, AI Agent, Foundation Model
FAQ #
Can large language models run on private infrastructure?
Yes. Open-source models like LLaMA, Mistral, DeepSeek, and Qwen can run on-premise or on European cloud. These models achieve performance comparable to GPT-4 for most business tasks, with the advantage of complete data sovereignty. HTX's PRISMA stack is designed to deploy these models for European SMEs.
Which LLM is best for business use?
The best model depends on your use case. For document analysis and chat, models like Mistral and LLaMA excel. For data analysis, DeepSeek offers strong reasoning. HTX's approach is model-agnostic: ORCA supports multiple models so you can choose the best fit without vendor lock-in.
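The model-agnostic idea can be sketched as a thin abstraction: the application talks to one interface, and the concrete backend (Mistral, LLaMA, DeepSeek, ...) is a swappable implementation detail. The class and method names below are hypothetical illustrations, not ORCA's actual API; the lambda backends are stubs standing in for real on-premise inference servers.

```python
from typing import Callable, Dict

# A backend maps a prompt to a completion. Real deployments would
# wrap an on-premise inference endpoint here; these are stubs.
Backend = Callable[[str], str]

class ChatRouter:
    """Route chat requests to interchangeable model backends,
    so switching models requires no application changes."""

    def __init__(self) -> None:
        self._backends: Dict[str, Backend] = {}

    def register(self, name: str, backend: Backend) -> None:
        self._backends[name] = backend

    def chat(self, model: str, prompt: str) -> str:
        if model not in self._backends:
            raise KeyError(f"unknown model: {model}")
        return self._backends[model](prompt)

router = ChatRouter()
router.register("mistral", lambda p: f"[mistral] {p}")
router.register("llama", lambda p: f"[llama] {p}")
print(router.chat("mistral", "Summarize this contract."))
```

Because the application only depends on the `chat` interface, replacing one open-weight model with another is a registration change, not a rewrite, which is the essence of avoiding vendor lock-in.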