Articles for category: AI Tools

Introducing the LangGraph Functional API

Have you ever wanted to take advantage of LangGraph’s core features like human-in-the-loop, persistence/memory, and streaming without having to explicitly define a graph? We’re excited to announce the release of the Functional API for LangGraph, available in Python and JavaScript. The Functional API allows you to leverage LangGraph features using a more traditional programming paradigm, making it easier to build AI workflows that incorporate human-in-the-loop interactions, short-term and long-term memory, and streaming capabilities. The Functional API consists of two decorators, entrypoint and task, which allow you to define workflows using standard functions with regular loops and conditionals.
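The excerpt describes the two decorators but stops short of showing them in use. Below is a minimal Python sketch of how they might fit together, assuming the entrypoint and task decorators are imported from langgraph.func and an in-memory checkpointer provides persistence; the write_essay task and the example topic are hypothetical placeholders, not part of the announcement.

from langgraph.func import entrypoint, task
from langgraph.checkpoint.memory import MemorySaver

@task
def write_essay(topic: str) -> str:
    # Stand-in for an LLM call; any ordinary Python function can be a task.
    return f"An essay about {topic}"

@entrypoint(checkpointer=MemorySaver())  # the checkpointer enables persistence/memory
def workflow(topic: str) -> dict:
    # Regular control flow: call the task, wait on its future, branch or loop as needed.
    essay = write_essay(topic).result()
    return {"essay": essay}

# Each thread_id keeps its own checkpointed state across invocations.
config = {"configurable": {"thread_id": "1"}}
print(workflow.invoke("cats", config))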

Everything OpenAI Launched at DevDay

Was this newsletter forwarded to you? Sign up to get it in your inbox. I’m at OpenAI’s developer conference, DevDay, today in San Francisco. Here’s what I saw. The big news is that the company launched a Realtime API that promises to allow anyone to build functionality similar to ChatGPT’s Advanced Voice Mode within their own app. Paired with their new model o1, released a few weeks ago, OpenAI is creating a new way to build software. o1 can prototype anything you have in your head in minutes rather than months. And the Realtime API enables developers to build software

Something Is Rotten in the State of Cupertino

Something Is Rotten in the State of Cupertino. John Gruber’s blazing takedown of Apple’s failure to ship many of the key Apple Intelligence features they’ve been actively promoting for the past twelve months. The fiasco here is not that Apple is late on AI. It’s also not that they had to announce an embarrassing delay on promised features last week. Those are problems, not fiascos, and problems happen. They’re inevitable. […] The fiasco is that Apple pitched a story that wasn’t true, one that some people within the company surely understood wasn’t true, and they set a course based on

Documentation enhancements for Deephaven Community

Deephaven’s API continues to grow with each release, as does the base of documentation supporting it. In addition to documenting brand-new features, we continue to revise and expand existing documentation. Read on to learn about some of the most significant recent changes to the Deephaven documentation. Documentation of brand-new features includes: We’ve added a new section for Community Questions. Here, you’ll find answers from the Deephaven team to frequently asked user questions. If you have a question, feel free to ask us on Slack. New additions to the Deephaven user guide include: New reference documents include: Deephaven’s documentation team has

Data Machina #250 – Data Machina

Llama 3: A Watershed AI moment? I reckon that the release of Llama 3 is perhaps one of the most important moments in AI development so far. The Llama 3 stable is already giving birth to all sorts of amazing animals and model derivatives. You can expect Llama 3 will unleash the mother of all battles against closed AI models like GPT-4. Meta AI just posted: “Our largest Llama 3 models are over 400B parameters. And they are still being trained.” The upcoming Llama-400B will change the playing field for many independent researchers, little AI startups, one-man AI developers, and

Netflix PRS 2024 – Applying LLMs to Recommendation Experiences

Recently, I was invited to speak at the 2024 Netflix Workshop on Personalization, Recommendation, and Search. I spoke about the challenges faced while building and deploying LLM-powered recommendation experiences at consumer scale. It was an enlightening conference that covered a range of topics from LLMs to recsys to measurement, and it was a fun opportunity to catch up with old friends in San Francisco. I shared some observations here and here. Here’s the full list of topics and speakers. LLMs as Agents and How to Evaluate Them on Real-World Tasks (Alane Suhr, Assistant Professor, UC Berkeley) Applying Language Models to

Software Entropy – DEV Community

Notes from reading Chapter 2 of The Pragmatic Programmer. Software, too, sees its entropy (disorder) increase over time. Just as the universe grows ever more disordered, software that isn’t properly maintained becomes more and more of a mess. The problem is that it isn’t only technical factors; psychological and cultural factors also accelerate software rot. Broken Windows Theory: the book applies the Broken Windows Theory to software development. If a single broken window in a building is left unrepaired for a long time, residents come to feel the building has been abandoned. Then other windows start to break, and eventually the whole building falls into decay. This theory helped the New York police reduce even violent crime by cracking down on minor offenses. Likewise, in development, leaving small problems unattended puts the entire codebase at risk of collapse. Tip 4: Don’t live with broken windows.

Introducing New Governance Capabilities to Scale AI Agents with Confidence

As we mentioned in our blog earlier this week, AI agents require enterprise data integration and output governance to achieve production quality. Today we’re launching updates to Mosaic AI Gateway, Unity Catalog tools, and AI/BI Genie that enable organizations to build production-ready AI agents with robust governance and data integration capabilities. Here’s why this matters: imagine your developers build an AI agent that summarizes high-priority customer complaints and alerts departments via Slack. Without high-quality performance, departments could be flooded with noisy alerts, causing them to miss truly urgent matters. Even worse, if developers receive direct access to the Slack API