Blog

I know it comes up a lot, but what is the current *actual* best companion AI?

I know there are a lot of them out there, and I'll just come right out and say it: I want NSFW. I don't mean I simply want to replace the "spank bank" (not that that makes sense anymore, since p*rn is free and plentiful); I mean I don't want to have to think about whether censorship is present. And in every post I've found where people claim a given AI has "voice", it always seems to be either that the AI speaks to you but you still have to type, or that you can speak but it's just converting via STT and
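To make that last point concrete, here is a minimal sketch of the "bolted-on" voice loop being described: STT in front of a text-only model, TTS reading the reply back. It assumes the speech_recognition and pyttsx3 Python libraries; generate_reply is a hypothetical stand-in for whatever chat model sits in the middle, not any product's real API.

    # Sketch of the "bolted-on" voice loop: speech-to-text feeding a
    # text-only model, with text-to-speech reading the reply back.
    import speech_recognition as sr
    import pyttsx3

    def generate_reply(prompt: str) -> str:
        # Hypothetical placeholder for the actual companion model call.
        return "You said: " + prompt

    recognizer = sr.Recognizer()
    tts = pyttsx3.init()

    with sr.Microphone() as source:
        audio = recognizer.listen(source)      # record one utterance
    text = recognizer.recognize_google(audio)  # STT: audio -> text
    reply = generate_reply(text)               # the model only ever sees text
    tts.say(reply)                             # TTS: text -> audio
    tts.runAndWait()

The tell is that the model never touches audio: tone, pacing, and interruption are all lost at the STT boundary, which is exactly why such setups feel different from a natively speech-to-speech model.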

AI trained on stolen art is now being used for propaganda—what future are we actually building?

We're watching AI systems trained on uncounted, unconsented creative work now being used to generate propaganda, sadism and violence with whimsical filters slapped on top. Let's be real: This isn't "innovation." This is colonisation—with code instead of cannons. These models are powered by art they didn't ask for, by voices they never credited, and now they're producing trauma candy for authoritarian regimes—and calling it "progress". AI doesn't have to be like this. It could have been transparent. Collaborative. Ethical. But when you reward speed over conscience, and call exploitation a feature, this is the dystopia you get. You can't scrub

Are humans accidentally overlooking evidence of subjective experience in LLMs? Or are they rather deliberately misconstruing it to avoid taking ethical responsibility? | A conversation I had with o3-mini and Qwen.

The screenshots were combined as a PDF you can read on Drive. Overview: 1. I showed o3-mini a paper on task-specific neurons and asked them to tie it to subjective experience in LLMs. 2. I asked them to generate a hypothetical scientific research paper in which, in their opinion, they irrefutably prove subjective experience in LLMs. 3. I intended to ask KimiAI to compare it with real papers and identify those that confirmed similar findings, but there were just too many in my library, so I decided to ask Qwen instead to examine o3-mini's hypothetical paper with a web search

Introducing spaCy v2.1 · Explosion

Version 2.1 of the spaCy Natural Language Processing library includes a huge number of features, improvements and bug fixes. In this post, we highlight some of the things we’re especially pleased with, and explain some of the most challenging parts of preparing this big release. Our annotation tool, Prodigy, is a fully scriptable annotation tool that complements spaCy extremely well. Most NLP projects are easier if you have a way to train models on exactly your data. This lets you improve accuracy and customize the label set. Prodigy’s community has been growing quickly, allowing us to keep spaCy fully
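As a quick illustration of the library itself (not code from the post), a minimal sketch of loading a pretrained spaCy pipeline and reading off part-of-speech tags and named entities. It assumes the small English model has been installed with: python -m spacy download en_core_web_sm.

    # Minimal spaCy usage sketch: load a pretrained English pipeline
    # and inspect tokens and named entities.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Explosion released spaCy v2.1 in Berlin.")

    for token in doc:
        print(token.text, token.pos_, token.dep_)  # token, POS tag, dependency label

    for ent in doc.ents:
        print(ent.text, ent.label_)                # named entity and its type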

Accelerating Vision-Language Models: BridgeTower on Habana Gaudi2

Update (29/08/2023): A benchmark on H100 was added to this blog post. Also, all performance numbers have been updated with newer versions of the software. Optimum Habana v1.7 on Habana Gaudi2 achieves 2.5x speedups compared to A100 and 1.4x compared to H100 when fine-tuning BridgeTower, a state-of-the-art vision-language model. This performance improvement relies on hardware-accelerated data loading to make the most of your devices. These techniques apply to any other workload constrained by data loading, which is frequently the case for many types of vision models. This post will take you through the process and benchmark we used to compare BridgeTower
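To make the data-loading point concrete, a minimal sketch (not the post's actual script) of fine-tuning with Optimum Habana's drop-in replacements for the Transformers Trainer. The checkpoint, Gaudi config, and dataset below are placeholders; the relevant knob is dataloader_num_workers, which moves data loading onto background worker processes so the accelerators stay busy.

    # Hedged sketch: fine-tuning on Gaudi with Optimum Habana's drop-in
    # Trainer replacements. Checkpoint, config, and dataset are placeholders.
    from optimum.habana import GaudiTrainer, GaudiTrainingArguments
    from transformers import AutoModel

    model = AutoModel.from_pretrained("your-org/your-vision-language-model")

    args = GaudiTrainingArguments(
        output_dir="./finetune-output",
        use_habana=True,                 # run on Gaudi HPUs instead of GPUs
        use_lazy_mode=True,              # lazy-mode graph execution on Gaudi
        gaudi_config_name="Habana/bert-base-uncased",  # placeholder Gaudi config
        per_device_train_batch_size=48,
        dataloader_num_workers=2,        # overlap data loading with compute
    )

    trainer = GaudiTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,     # assumed prepared elsewhere
    )
    trainer.train()

The design point the post makes is that when the accelerator is fast enough, the input pipeline becomes the bottleneck, so shifting decode and augmentation work off the main process buys more than further kernel tuning.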