Articles for category: AI Tools

Ethical Considerations and Best Practices in LLM Development 

Bias is inherent to building an ML model, and it exists on a spectrum. Our job is to tell desirable bias apart from the bias that needs correction. We can identify biases using benchmarks like StereoSet and BBQ, and minimize them with ongoing monitoring across versions and iterations. Adhering to data protection laws becomes less complex if we focus less on the internal structure of the algorithms and more on the practical contexts of use. To keep data secure throughout the model’s lifecycle, implement practices such as data anonymization, secure model serving, and privacy penetration tests. Transparency can
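The data-anonymization step mentioned above can be sketched in a few lines. This is a minimal illustration only: the regex patterns and placeholder labels are assumptions for demonstration, and a production pipeline would rely on a vetted PII-detection library and locale-aware rules rather than ad-hoc regexes.

```python
import re

# Hypothetical PII patterns for illustration only; real systems need
# far more robust, locale-aware detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace each matched PII span with a type placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(anonymize("Contact jane.doe@example.com or 555-123-4567."))
```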

Announcing Databricks’ Offer for Games Startups

Databricks is excited to announce an expansion to our startup offer, providing game studios access to free credits, expert advice and a data and AI ecosystem that can rally behind you. Our goal is to help you make the best games possible for your community, bringing more play to the world! Here’s a summary of what game studios receive as part of this offer:

- $50,000 worth of Databricks credits to get you going
- Complimentary technical support for questions, breakfixes and tuning
- Access to Databricks technical resources to help guide you on your journey
- 5 free licenses for Sigma Computing’s

Changelog · Prodigy · An annotation tool for AI, Machine Learning & NLP

This page lists the history of changes to Prodigy. Whenever a new update is available, you’ll receive an

You can now fine-tune open-source video models

AI video generation has gotten really good. Some of the best video models like tencent/hunyuan-video are open-source, and the community has been hard at work building on top of them. We’ve adapted the Musubi Tuner by @kohya_tech to run on Replicate, so you can fine-tune HunyuanVideo on your own visual content. Never Gonna Give You Up animal edition, courtesy of @flngr and @fofr. HunyuanVideo is good at capturing the style of the training data, not only in the visual appearance of the imagery and the color grading, but also in the motion of the camera and the way the characters
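Kicking off such a fine-tune typically means submitting a training job with your dataset and a handful of hyperparameters. The sketch below only assembles a hypothetical request body; the field names (`input_videos`, `trigger_word`, `epochs`) and the destination slug are illustrative assumptions, not Replicate's documented schema — consult the trainer's page on Replicate for the real parameters.

```python
import json

def build_training_request(dataset_url: str, trigger_word: str, epochs: int = 16) -> str:
    """Assemble a hypothetical fine-tuning request body.

    All field names here are assumed for illustration; check the
    Musubi Tuner / HunyuanVideo trainer docs for the actual schema.
    """
    payload = {
        "destination": "your-username/hunyuan-fine-tuned",  # hypothetical model slug
        "input": {
            "input_videos": dataset_url,   # zip of training clips (assumed field)
            "trigger_word": trigger_word,  # token that invokes the learned style
            "epochs": epochs,
        },
    }
    return json.dumps(payload)

body = build_training_request("https://example.com/clips.zip", "MYSTYLE")
```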

How to Fine-Tune a FLUX Model in under an hour with AI Toolkit and a DigitalOcean H100 GPU

FLUX has been taking the internet by storm this past month, and for good reason. Its claims of superiority over models like DALL-E 3, Ideogram, and Stable Diffusion 3 have proven well founded. With support for the models being added to more and more popular image generation tools like Stable Diffusion Web UI Forge and ComfyUI, this expansion into the Stable Diffusion space will only continue. Since the model’s release, we have also seen a number of important advancements to the user workflow. These notably include the release of the first LoRA (Low-Rank Adaptation) and ControlNet models
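The reason LoRA makes fine-tuning models like FLUX tractable on a single GPU is a parameter-count argument: instead of training a full weight update, LoRA learns two thin matrices whose product approximates it. A quick back-of-the-envelope sketch (the 4096-dimension layer and rank 16 are illustrative values, not FLUX's actual architecture):

```python
def lora_param_counts(d_in: int, d_out: int, rank: int) -> tuple[int, int]:
    """Compare trainable parameters: full weight update vs. a rank-r LoRA pair.

    LoRA approximates the update to a (d_out x d_in) weight matrix as
    B @ A, where B is (d_out x r) and A is (r x d_in).
    """
    full = d_out * d_in
    lora = d_out * rank + rank * d_in
    return full, lora

full, lora = lora_param_counts(4096, 4096, rank=16)
print(full, lora)  # 16777216 vs 131072: a 128x reduction for this layer
```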

Top 7 meeting intelligence platforms in 2025

Companies lose millions to unproductive meetings with executives spending up to 23 hours per week in conversations that often lead to more questions than answers. Still, the real cost isn’t just time—it’s the valuable insights, decisions, and action items that slip through the cracks between meetings. Meeting intelligence platforms aim to change this paradigm. They combine advanced speech recognition, natural language processing, and conversation analytics to turn routine meetings into searchable data that drives better business outcomes. They’re not just recording conversations. They’re unlocking patterns, surfacing insights, and turning meetings into a strategic asset. However, finding the right platform for

LangGraph 0.3 Release: Prebuilt Agents

By Nuno Campos and Vadym Barda. Over the past year, we’ve invested heavily in making LangGraph the go-to framework for building AI agents. With companies like Replit, Klarna, LinkedIn and Uber choosing to build on top of LangGraph, we have more conviction than ever that we are on the right path. A core principle of LangGraph is to be as low level as possible: there are no hidden prompts and no enforced “cognitive architectures” in LangGraph. This has helped make it production ready and also distinguishes it from other frameworks. At the same time, we do see the value
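The "as low level as possible" principle — explicit nodes and edges transforming a shared state, with nothing hidden — can be sketched in plain Python. This is a conceptual illustration of the idea, not LangGraph's actual API:

```python
def run_graph(nodes, edges, state, entry):
    """Walk an explicit node graph: each node is a function that
    transforms the state dict; each edge names the next node
    (None terminates the run)."""
    current = entry
    while current is not None:
        state = nodes[current](state)
        current = edges.get(current)
    return state

# Toy two-step agent: draft a reply, then polish it.
nodes = {
    "draft":  lambda s: {**s, "reply": f"Draft for: {s['question']}"},
    "polish": lambda s: {**s, "reply": s["reply"].upper()},
}
edges = {"draft": "polish", "polish": None}

result = run_graph(nodes, edges, {"question": "status?"}, "draft")
print(result["reply"])  # DRAFT FOR: STATUS?
```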

Trace & Evaluate your Agent with Arize Phoenix

So, you’ve built your agent. It takes in inputs and tools, processes them, and generates responses. Maybe it’s making decisions, retrieving information, executing tasks autonomously, or all three. But now comes the big question – how effectively is it performing? And more importantly, how do you know? Building an agent is one thing; understanding its behavior is another. That’s where tracing and evaluations come in. Tracing allows you to see exactly what your agent is doing step by step—what inputs it receives, how it processes information, and how it arrives at its final output. Think of it like having an
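The core idea behind tracing — recording each step's inputs, outputs, and timing as spans — can be sketched with a decorator. This is a conceptual illustration only; Arize Phoenix itself instruments agents via OpenTelemetry-based tooling rather than anything like this hand-rolled logger.

```python
import functools
import time

TRACE = []  # in-memory span log; a real tracer exports spans to a collector

def traced(fn):
    """Record each call's name, inputs, output, and duration as a span."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE.append({
            "step": fn.__name__,
            "inputs": args,
            "output": result,
            "seconds": time.perf_counter() - start,
        })
        return result
    return wrapper

@traced
def retrieve(query):
    return f"docs for {query}"

@traced
def answer(query):
    return f"answer using {retrieve(query)}"

answer("pricing")  # inner span (retrieve) is recorded before the outer one
```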

Animals Crossing: AI Helps Protect Wildlife Across the Globe

From Seattle, Washington, to Cape Town, South Africa — and everywhere around and between — AI is helping conserve the wild plants and animals that make up the intricate web of life on Earth. It’s critical work that sustains ecosystems and supports biodiversity at a time when the United Nations estimates over 1 million species are threatened with extinction. World Wildlife Day, a UN initiative, is celebrated every March 3 to recognize the unique contributions wild animals and plants have on people and the planet — and vice versa. “Our own survival depends on wildlife,” the above video on this

Magma: A foundation model for multimodal AI agents across digital and physical worlds

Imagine an AI system capable of guiding a robot to manipulate physical objects as effortlessly as it navigates software menus. Such seamless integration of digital and physical tasks has long been the stuff of science fiction. Today, Microsoft researchers are bringing that vision closer to reality with Magma, a multimodal AI foundation model designed to process information and generate action proposals across both digital and physical environments. It is designed to enable AI agents to interpret user interfaces and suggest actions like button clicks, while also orchestrating robotic movements and interactions in the physical world.