Articles for category: AI Tools

Stable Diffusion XL on Mac with Advanced Core ML Quantization

Stable Diffusion XL was released yesterday and it’s awesome. It can generate large (1024×1024) high quality images; adherence to prompts has been improved with some new tricks; it can effortlessly produce very dark or very bright images thanks to the latest research on noise schedulers; and it’s open source! The downside is that the model is much bigger, and therefore slower and more difficult to run on consumer hardware. Using the latest release of the Hugging Face diffusers library, you can run Stable Diffusion XL on CUDA hardware in 16 GB of GPU RAM, making it possible to use it

Emerging Trends in Java Garbage Collection

Efficient garbage collection is essential to Java application performance. What worked well in 1995, when Java was first released, won’t cope with the high demands of modern computing. To stay ahead of the game, you need to make sure you’re using the best GC algorithm for your application. In this article, we’ll look at evolving trends in Java garbage collection, and take a quick look at what’s planned for the future. Computing Trends Any software that stands the test of time must evolve to keep up with technology. To understand how and why GC has changed over the years, we

Episode #139 f”Yes!” for the f-strings

Sponsored by DigitalOcean: pythonbytes.fm/digitalocean Special guest: Ines Montani Brian #1: Simplify Your Python Developer Environment Contributed by Nils de Bruin “Three tools (pyenv, pipx, pipenv) make for smooth, isolated, reproducible Python developer and production environments.” The tools: pyenv – install and manage multiple Python versions and flavors pipx – install a Python application with its own virtual environment for global use pipenv – manage virtual environments and dependencies on a per-project basis Brian note: I’m not sold on any of these yet; honestly, I haven’t given them a fair shake either, and didn’t really know how to try them
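The per-project isolation these tools manage can be sketched with nothing but the standard library’s venv module. This is a minimal illustration of the idea, not how pipenv works internally, and the directory names are arbitrary:

```python
import os
import tempfile
import venv

# Create a throwaway "project" directory with its own isolated environment.
project_dir = tempfile.mkdtemp(prefix="demo-project-")
env_dir = os.path.join(project_dir, ".venv")

# with_pip=False keeps this fast and offline; pipenv would also install pip
# and resolve dependencies here.
venv.EnvBuilder(with_pip=False).create(env_dir)

# Every venv gets its own marker file recording which interpreter it uses.
print(os.path.exists(os.path.join(env_dir, "pyvenv.cfg")))  # True
```

pipenv layers dependency resolution and lockfiles on top of exactly this kind of per-project environment.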

Open-sourcing Knowledge Distillation Code and Weights of SD-Small and SD-Tiny

In recent times, the AI community has witnessed a remarkable surge in the development of larger and more performant language models, such as Falcon 40B, LLaMa-2 70B, and MPT 30B, and in the imaging domain with models like SD2.1 and SDXL. These advancements have undoubtedly pushed the boundaries of what AI can achieve, enabling highly versatile and state-of-the-art image generation and language understanding capabilities. However, as we marvel at the power and complexity of these models, it is essential to recognize a growing need to make AI models smaller, more efficient, and more accessible, particularly by open-sourcing them. At Segmind,

useState Overload? How useReducer Can Make Your React State Management Easier

Introduction If you’ve been building React applications for a while, you’ve likely reached for useState as your go-to state management tool. It’s simple, intuitive, and works great for basic scenarios. But what happens when your component’s state logic grows complex? Nested useState calls, intertwined updates, and messy handlers can quickly turn your clean code into a tangled mess. That’s where useReducer comes in—a more scalable and maintainable alternative for managing complex state. In this article, we’ll explore when and why you should consider switching from useState to useReducer. 1. The Problem with useState in Complex Scenarios ✅ useState works well
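The idea behind useReducer is framework-independent: a pure function that maps (state, action) to a new state, with all update logic in one place instead of scattered across handlers. A minimal sketch of that pattern, in Python for brevity (the counter_reducer name and action shapes are illustrative, not React’s API):

```python
# Reducer pattern: a pure function of (state, action) -> new state.
def counter_reducer(state, action):
    if action["type"] == "increment":
        return {**state, "count": state["count"] + 1}
    if action["type"] == "decrement":
        return {**state, "count": state["count"] - 1}
    if action["type"] == "reset":
        return {"count": 0}
    raise ValueError(f"unknown action: {action['type']}")

# Dispatching a sequence of actions threads the state through the reducer,
# which is exactly what useReducer does for a component.
state = {"count": 0}
for action in [{"type": "increment"}, {"type": "increment"}, {"type": "decrement"}]:
    state = counter_reducer(state, action)
print(state)  # {'count': 1}
```

Because the reducer is pure, every possible transition is testable in isolation, which is what makes the pattern scale to complex state.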

PyDev of the Week: Ines Montani

This week we welcome Ines Montani (@_inesmontani) as our PyDev of the Week! Ines is the Founder of Explosion AI and a core developer of the spaCy package, which is a Python package for Natural Language Processing. If you would like to know more about Ines, you can check out her website or her Github profile. Let’s take a few moments to get to know her better! Can you tell us a little about yourself (hobbies, education, etc): Hi, I’m Ines! I pretty much grew up on the internet and started making websites when I was 11. I remember sitting

Practical 3D Asset Generation: A Step-by-Step Guide

Generative AI has become an instrumental part of artistic workflows for game development. However, as detailed in my earlier post, text-to-3D lags behind 2D in terms of practical applicability. This is beginning to change. Today, we’ll be revisiting practical workflows for 3D Asset Generation and taking a step-by-step look at how to integrate Generative AI in a PS1-style 3D workflow. Why the PS1 style? Because it’s much more forgiving to the low fidelity of current text-to-3D models, and allows us to go from text to usable 3D asset with as little effort as possible. Prerequisites This tutorial assumes some basic

React 19 Memoization: Is useMemo & useCallback No Longer Necessary?

React 19 Brings Automatic Optimization For years, React developers have relied on useMemo and useCallback to optimize performance and prevent unnecessary re-renders. However, React 19 introduces a game-changing feature: the React Compiler, which eliminates the need for manual memoization in most cases. In this article, we’ll explore how memoization worked before React 19, how the new React Compiler optimizes performance, and when (if ever) you still need useMemo and useCallback. The Problem with Manual Memoization What is Memoization in React? Memoization is a performance optimization technique that caches the results of expensive function calls, preventing redundant calculations when the same

Fine-tune BERT, XLNet and GPT-2 · Explosion

Huge transformer models like BERT, GPT-2 and XLNet have set a new standard for accuracy on almost every NLP leaderboard. You can now use these models in spaCy, via a new interface library we’ve developed that connects spaCy to Hugging Face’s awesome implementations. In this post we introduce our new wrapping library, spacy-transformers. It features consistent and easy-to-use interfaces to several models, which can extract features to power your NLP pipelines. Support is provided for fine-tuning the transformer models via spaCy’s standard nlp.update training API. The library also calculates an alignment to spaCy’s linguistic tokenization, so you can relate the