Articles for category: AI Tools

Rust’s Generic Associated Types: What Are They?

A Bit of Insight Into Generic Associated Types (GATs) That name is so long! What the heck is this? Don’t worry, let’s break it down from the beginning. Let’s start by reviewing some of Rust’s syntax structure. What makes up a Rust program? The answer: items. Every Rust program is composed of individual items. For example, if you define a struct in main.rs, then add an impl block with two methods, and finally write a main function – these are three items within the module item of your crate. Now that we’ve covered items, let’s talk about associated items. Associated

Introducing Storage Regions on the HF Hub

As part of our Enterprise Hub plan, we recently released support for Storage Regions. Regions let you decide where your org’s models and datasets will be stored. This has two main benefits, which we’ll briefly go over in this blog post: regulatory and legal compliance (and, more generally, better digital sovereignty), and performance (improved download and upload speeds and latency). Currently we support the following regions: US 🇺🇸 and EU 🇪🇺, with Asia-Pacific 🌏 coming soon. But first, let’s see how to set up this feature in your organization’s settings 🔥 Org settings If your organization is not an Enterprise Hub org yet, you
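The region itself is chosen in the org settings UI, but once it is set, repos created under that org are stored in the selected region. A hedged sketch of what that looks like with the `huggingface_hub` client (the org, repo, and file names below are hypothetical):

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes you are already logged in, e.g. via `huggingface-cli login`

# Hypothetical Enterprise org "my-company" whose storage region was set to EU
# in the org settings; repos created under it inherit that region.
repo_id = "my-company/internal-model"
api.create_repo(repo_id, repo_type="model", private=True)

# Upload a (hypothetical) local file into the region-backed repo
api.upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id=repo_id,
)
```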

Python for Beginners: How to Go from Basics to Advanced in 2025

From Zero to Hero: A Step-by-Step Guide to Learning Python Python is one of the easiest and most powerful programming languages to learn. Whether you want to build websites, analyze data, or automate tasks, this step-by-step guide will take you from absolute beginner to confident coder. Why Learn Python? ✔ Beginner-friendly syntax (reads like English) ✔ Huge demand in AI, web dev, data science, and automation ✔ Massive community support (tons of free resources) Step 1: Learn Python Basics Variables & Data Types (integers, strings, lists) Conditionals & Loops (if, for, while) Functions & Modules (reusable code blocks) 📌 Example:
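As a minimal sketch of the Step 1 concepts (variables, conditionals, loops, and functions), not necessarily the article’s own example:

```python
# Variables & data types
name = "Ada"
scores = [88, 92, 79]  # a list of integers

# Function: a reusable block of code
def average(values):
    return sum(values) / len(values)

# Conditionals & loops
for score in scores:
    if score >= 90:
        print(f"{name} got a high score: {score}")
    else:
        print(f"{name} scored {score}")

print(f"Average: {average(scores):.1f}")
```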

Make your llama generation time fly with AWS Inferentia2

Update (02/2024): Performance has improved even more! Check our updated benchmarks. In a previous post on the Hugging Face blog, we introduced AWS Inferentia2, the second-generation AWS Inferentia accelerator, and explained how you could use optimum-neuron to quickly deploy Hugging Face models for standard text and vision tasks on AWS Inferentia2 instances. In a further step of integration with the AWS Neuron SDK, it is now possible to use 🤗 optimum-neuron to deploy LLMs for text generation on AWS Inferentia2. And what better model could we choose for that demonstration than Llama 2, one of the most popular
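As a rough sketch of what such an optimum-neuron deployment can look like, assuming the Llama 2 checkpoint and compilation parameters below (they are illustrative, not the post’s exact settings):

```python
from optimum.neuron import NeuronModelForCausalLM
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed checkpoint

# Export/compile the model for Inferentia2; batch size, sequence length,
# core count, and cast type here are illustrative assumptions.
model = NeuronModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    batch_size=1,
    sequence_length=2048,
    num_cores=2,
    auto_cast_type="f16",
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("What is AWS Inferentia2?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```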

Warp: The AI-Powered Terminal Every Developer Needs to Know About

An essential item in every dev’s arsenal is the terminal. And, like many developers, I’m a fan of customizing it to the max, keeping everything “clean” and “colorful” for that little FPS boost. The problem is that customizing the Windows cmd isn’t as friendly as the terminal of our friend Linux. I tried Oh My Posh, but in the end it brought more problems than solutions (or maybe I messed something up, who knows 🤓). It was in that mix of frustration and a bit of “not knowing what to do” that, while searching around, I found Warp, a terminal

Our Year in Review · Explosion

While 2020 hasn’t been easy for anyone, at Explosion we’ve considered ourselves relatively fortunate in this most interesting year. We’ve always worked remotely, so we’ve been able to take both pride and comfort in continuing to ship good software. Here’s a look back at what we’ve been up to. 🔮 Jan 28: 2020 started with a big release: the alpha of Thinc v8.0, a lightweight deep learning library that offers an elegant, type-checked, functional-programming API for composing models, with support for layers defined in other frameworks such as PyTorch, TensorFlow or MXNet. Thinc was re-written from the ground up to
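To give a flavor of that composition API, a minimal Thinc v8 sketch (layer sizes and data shapes below are arbitrary assumptions):

```python
import numpy
from thinc.api import chain, Relu, Softmax

# Compose a small feed-forward model from functional, type-checked layers
model = chain(Relu(nO=64), Relu(nO=64), Softmax(nO=10))

# Remaining dimensions are inferred from sample data at initialization time
X = numpy.zeros((8, 20), dtype="float32")
Y = numpy.zeros((8, 10), dtype="float32")
model.initialize(X=X, Y=Y)

predictions = model.predict(X)
print(predictions.shape)  # (8, 10)
```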

A Deep Dive into RoBERTa, Llama 2, and Mistral for Disaster Tweets Analysis with LoRA

In the fast-moving world of Natural Language Processing (NLP), we often find ourselves comparing different language models to see which one works best for specific tasks. This blog post is all about comparing three models: RoBERTa, Mistral-7b, and Llama-2-7b. We used them to tackle a common problem: classifying tweets about disasters. It is important to note that Mistral and Llama 2 are large models with 7 billion parameters. In contrast, RoBERTa-large (355M parameters) is a relatively smaller model used as a baseline for the comparison study. In this blog, we used a PEFT (Parameter-Efficient Fine-Tuning) technique: LoRA (Low-Rank Adaptation of
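As a hedged sketch of attaching LoRA adapters to the RoBERTa baseline with the peft library (the rank, alpha, dropout, and target modules below are illustrative assumptions, not the post’s exact configuration):

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model, TaskType

# Binary disaster/non-disaster tweet classification on top of RoBERTa-large
model = AutoModelForSequenceClassification.from_pretrained("roberta-large", num_labels=2)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # low-rank dimension (illustrative)
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projections in RoBERTa
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters (and classifier head) train
```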

Sorting Algorithms Made Visual – Feedback Needed!

Hey devs! 👋 I’ve built a React-based sorting algorithm visualizer to make learning sorting algorithms more interactive and engaging. I’d love to get your feedback and contributions! 🚀 🔗 Live Demo: Sorting Visualizer 🔗 GitHub Repo: GitHub Link 🌟 Features ✅ Supports multiple sorting algorithms (Bubble Sort, Selection Sort, etc.) ✅ Dynamic animations to illustrate how sorting works ✅ Adjustable speed and array size for better visualization ✅ Interactive UI for an engaging experience 💡 Why I Built This Sorting algorithms are fundamental in computer science, but understanding their step-by-step execution can be tricky. This project aims to make them
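The project itself is a React app, but the step-by-step swapping the visualizer animates for Bubble Sort can be sketched in plain Python (an illustrative implementation, not the project’s code):

```python
def bubble_sort_steps(values):
    """Yield the array after every swap, mirroring what a visualizer animates."""
    arr = list(values)
    for end in range(len(arr) - 1, 0, -1):
        for i in range(end):
            if arr[i] > arr[i + 1]:
                arr[i], arr[i + 1] = arr[i + 1], arr[i]
                yield list(arr)

for step in bubble_sort_steps([5, 1, 4, 2, 8]):
    print(step)
```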