
# Apple Silicon (MLX) port of Karpathy's autoresearch — autonomous AI research loops on Mac, no PyTorch

#### Source

Type: Content via X
Original link: https://x.com/trevinpeterson/status/2030611877198221458?s=43&t=ANuJI-IuN5rdsaLueycEbA
Publication date: 2026-03-23


## Summary

## Introduction

Innovation in artificial intelligence and machine learning continues at a remarkable pace, and a recent example is the “Apple Silicon (MLX) port of Karpathy’s autoresearch” project. Available on GitHub, it is a notable step for autonomous AI research: it lets autonomous research loops run directly on Apple Silicon machines, with no need for PyTorch. The tool was shared on X to highlight how new hardware architectures can significantly influence AI research results, opening new opportunities to optimize training processes.

The project was developed to take full advantage of Apple Silicon through MLX, Apple’s array framework for machine learning on its own chips, offering an effective alternative to PyTorch- and CUDA-based setups. This makes it particularly interesting for developers and researchers working on Macs, who can run machine learning experiments more efficiently and autonomously.

## What It Offers / What It Is About

The “Apple Silicon (MLX) port of Karpathy’s autoresearch” project is a port of Andrej Karpathy’s original autoresearch, adapted to run natively on Apple Silicon hardware. This removes the need for PyTorch or CUDA, making the training process simpler and more accessible. The project keeps the same ground rules as the original: a modifiable train.py file, an evaluation metric (val_bpb, validation bits per byte, where lower is better), a fixed training budget, and a keep-or-revert mechanism managed via Git.
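For context on the metric: bits per byte is the model’s cross-entropy on held-out text, expressed in bits and normalized by the raw byte count. Below is a minimal sketch of that standard definition; it is an illustration only, not the repository’s actual evaluation code (prepare.py defines the real harness).

```python
# Sketch of a standard bits-per-byte computation (illustrative only;
# the repository's prepare.py defines the actual evaluation).
import math

def bits_per_byte(total_loss_nats: float, total_bytes: int) -> float:
    """Total validation cross-entropy (in nats) converted to bits and
    normalized by the number of raw UTF-8 bytes of validation text."""
    return total_loss_nats / math.log(2) / total_bytes

# Hypothetical numbers: 1.2e6 nats of summed loss over 1e6 bytes of text
print(bits_per_byte(1.2e6, 1_000_000))  # ≈ 1.73 bits/byte (lower is better)
```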

The project includes several key components:

  • prepare.py: Manages data preparation, tokenizer, dataloader, and evaluation. This file should be considered fixed.
  • train.py: Contains the model, optimizer, and training loop. This is the file that the agent modifies.
  • program.md: Describes the protocol of the autonomous experiment.
  • results.tsv: Records the history of experiments.

The autonomous research loop works by modifying train.py, running an experiment under the fixed time budget, reading the val_bpb metric, keeping the changes if they improve the score, and reverting them otherwise. The process repeats continuously, so the training recipe improves without human intervention. A minimal sketch of the loop follows.
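Here is a minimal sketch of that keep-or-revert loop, assuming train.py prints its val_bpb score to stdout. The function names, the log-parsing regex, and the Git commands are illustrative, since program.md defines the actual protocol:

```python
# Illustrative keep-or-revert loop (the real protocol lives in program.md).
import re
import subprocess

def propose_change(path: str) -> None:
    """Placeholder for the agent's edit step. In the real loop, a coding
    agent (e.g. Claude Code) rewrites train.py here."""
    raise NotImplementedError("plug in your coding agent here")

def run_experiment() -> float:
    """Run train.py under its fixed time budget and parse val_bpb from the
    output. The regex assumes a line like 'val_bpb: 1.2345'; adjust it to
    the script's actual log format."""
    result = subprocess.run(
        ["uv", "run", "train.py"], capture_output=True, text=True
    )
    match = re.search(r"val_bpb[:=]\s*([0-9.]+)", result.stdout)
    if match is None:
        raise RuntimeError("could not parse val_bpb from output")
    return float(match.group(1))

best = run_experiment()                  # score the current train.py as baseline
while True:
    propose_change("train.py")           # agent edits the training script
    try:
        score = run_experiment()
    except Exception:
        score = float("inf")             # crashed runs are always reverted
    if score < best:                     # lower bits-per-byte is better
        best = score
        subprocess.run(["git", "commit", "-am", f"val_bpb={score:.4f}"])
    else:
        subprocess.run(["git", "checkout", "--", "train.py"])
```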

## Why It’s Relevant

### Hardware-Specific Innovation

The project was shared on X to highlight how new hardware architectures can influence AI research results. In particular, the port to Apple Silicon showed that smaller, faster models can outperform larger ones simply because they fit more optimization steps into the available time budget. This is a concrete example of how hardware shapes AI model design choices; the toy calculation below illustrates the tradeoff.
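A toy calculation makes the point: a smaller model may improve the objective less per step, yet still win because it fits many more steps into the same wall-clock budget. All numbers below are made up purely for illustration:

```python
# Toy time-budget tradeoff: throughput vs. per-step progress.
# All numbers are hypothetical, purely to illustrate the argument.
budget_s = 60.0  # fixed wall-clock budget, as in the project

models = {
    "small": (50.0, 0.0010),  # (steps per second, improvement per step)
    "large": (10.0, 0.0035),
}

for name, (steps_per_s, gain_per_step) in models.items():
    steps = budget_s * steps_per_s
    print(f"{name}: {steps:.0f} steps -> total improvement {steps * gain_per_step:.2f}")
# small: 3000 steps -> total improvement 3.00
# large: 600 steps  -> total improvement 2.10
```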

### Efficiency and Accessibility

The absence of PyTorch and CUDA dependencies makes the project particularly appealing to developers working on Apple machines: machine learning experiments can run efficiently and autonomously, with no complex configuration and no dedicated GPU cluster. The project is also a practical demonstration that hardware-specific optimization can produce significant improvements in AI research results.

### Usage Context

The project is useful for researchers and developers who want to explore new hardware architectures and have their AI models optimized autonomously. Its findings carry over to several contexts, such as improving the performance of machine learning models, speeding up training, and developing new hardware-specific solutions.

## How to Use It / Deep Dive

To start using this project, you need a Mac with Apple Silicon and Python installed. Follow these steps (they are collected into a single sketch below the list):

  1. Install dependencies: use uv to install the necessary dependencies. If uv is not already installed, you can install it with the command curl -LsSf [URL] | sh (the installer URL is given in the official uv documentation).
  2. Prepare the data: run uv run prepare.py to prepare the data and tokenizer.
  3. Run an experiment: run uv run train.py to start a one-minute training run.
  4. Automate: point a coding agent such as Claude Code at program.md and let it manage the autonomous research loop.
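
Collected as a single sketch, assuming uv is installed and you are inside a clone of the repository (the repository link is in the original post):

```sh
# Prepare the dataset, tokenizer, and evaluation harness (fixed files)
uv run prepare.py

# One training run under the fixed ~1-minute time budget
uv run train.py

# For the autonomous loop: hand program.md to a coding agent, which
# edits train.py and keeps or reverts its changes via Git.
```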

To dig deeper, consult the GitHub repository and explore its configuration files and experiment history (results.tsv); the official uv documentation covers the tooling side.

## Final Thoughts

The “Apple Silicon (MLX) port of Karpathy’s autoresearch” project fits into a broader trend of hardware-specific innovation in machine learning. Its results underline the importance of adapting AI models to the specific capabilities of the hardware they run on, which can translate into significant performance gains. The project is also a concrete example of how hardware innovation shapes AI model design choices, opening new opportunities for research and development in machine learning.


## Use Cases

  • Private AI Stack: Integration into proprietary pipelines
  • Client Solutions: Implementation for client projects

## Resources

### Original Links


Article reported and selected by the Human Technology eXcellence team, processed via artificial intelligence (in this case with the LLM HTX-EU-Mistral3.1Small) on 2026-03-23 08:50.
Original source: https://x.com/trevinpeterson/status/2030611877198221458?s=43&t=ANuJI-IuN5rdsaLueycEbA



## The HTX Take

This topic is at the heart of what we build at HTX. The technology discussed here — whether it’s about AI agents, language models, or document processing — represents exactly the kind of capability that European businesses need, but deployed on their own terms.

The challenge isn’t whether this technology works. It does. The challenge is deploying it without sending your company data to US servers, without violating GDPR, and without creating vendor dependencies you can’t escape.

That’s why we built ORCA — a private enterprise chatbot that brings these capabilities to your infrastructure. Same power as ChatGPT, but your data never leaves your perimeter. No per-user pricing, no data leakage, no compliance headaches.

Want to see how ready your company is for AI? Take our free AI Readiness Assessment — 5 minutes, personalized report, actionable roadmap.

Discover ORCA by HTX
Is your company ready for AI?
Take the free assessment →

## FAQ

How is AI transforming European businesses?

AI is enabling businesses to automate document processing, enhance decision-making, and unlock insights from their data. However, European businesses face unique challenges: GDPR compliance, AI Act requirements, and data sovereignty concerns. Private AI solutions — like HTX's PRISMA stack — address all three while delivering the same capabilities as cloud AI.

What's the first step to adopting AI in my company?

Start with an AI readiness assessment to identify where AI can have the biggest impact. HTX offers a free 5-minute assessment at ht-x.com/assessment/ that evaluates your digital maturity, identifies high-impact opportunities, and provides a personalized roadmap.
