Hello again! It's been a very big week for the more protocol-brained among us here, since this week was the thrice-yearly IETF meeting, held in Montreal this time, where two of my colleagues and many of our community members helped lead a session on bringing AT to the IETF. That session recording is now available for anyone wanting to catch up on the discussion and the plan for drafting parts of AT as IETF specifications -- see also this repo -- and it marks a really exciting next step towards maturing the governance of AT.
For the perhaps less protocol-brained among us (heaven forbid), this is also a good opportunity to think more about the distinction between core AT primitives and application-level features. Moderation, for example, is clearly an application-level feature, but that doesn't change the fact that our approach to it is open, scalable, and designed to be as generic as possible for others to adopt. The 2026 PyCon US CFP opened this week, which got me thinking about the Python-specific parts of our moderation stack and led me to this idea for a talk:
>> Bluesky is a decentralized social media application built on top of the AT Protocol. One way that Bluesky supports decentralization, and empowers users in the Atmosphere community to run their own unique AT apps, is through open source moderation tools. We do this through two primary parts of our stack: Osprey, an event-stream decision engine and analysis UI designed for investigating activity and taking automatic action; and Ozone, a labeling service and web frontend for making moderation decisions.
>> Osprey is written in Python, and was designed and open sourced in collaboration with Discord. Osprey is a library for processing actions through human-written rules and outputting to sinks such as labels, webhooks back to an API, and others. It evaluates events using structured logic, user-defined functions, and external signals to assign labels, verdicts, and actions. Some of these signals, such as our Toxrank model for toxicity detection, make use of fine-tuned LLMs and other classifiers that expose their own web endpoints to Osprey. We also use image OCR and hashing to create actionable moderation metadata, which can be automatically actioned, surfaced to moderators via Ozone, and used in turn to shape our Discover feed algorithms.
>> Although some of our internal heuristics are private, Osprey and Ozone are designed to be deployed and run by other app hosts with their own custom rules and moderation practices, whether they are reimplementing the Bluesky lexicon and feature set or running a different kind of Atmosphere app with entirely different content. In this talk, you’ll see a demo of both applications, learn about our rules engine and other architectural features, and leave with enough knowledge to integrate our open source moderation tools into your own stack.
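To make the Osprey half of that pitch a little more concrete for the Python-curious: the core flow is an event stream in, human-written rules over those events, and labels out to one or more sinks. Here's a rough sketch of that shape -- to be clear, this is illustrative code of my own, not Osprey's actual API, and every name in it (Event, toxicity_rule, webhook_sink, the 0.9 threshold) is made up for the example:

```python
# Illustrative only: NOT Osprey's real API. A toy rules-engine shape showing
# the flow described above: events in, human-written rules over them, labels
# out to sinks (e.g. a webhook back to the app's API).
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Set


@dataclass
class Event:
    """A single action flowing through the engine, e.g. a newly created post."""
    kind: str                   # e.g. "create_post"
    actor: str                  # hypothetical actor identifier
    text: str = ""
    signals: Dict[str, float] = field(default_factory=dict)  # scores attached upstream


# A "rule" here is just a function from an Event to a set of label strings.
Rule = Callable[[Event], Set[str]]


def toxicity_rule(event: Event) -> Set[str]:
    # In a real deployment this score might come from a fine-tuned classifier
    # behind its own web endpoint; here it's a precomputed signal on the event.
    return {"toxic"} if event.signals.get("toxicity", 0.0) >= 0.9 else set()


def spam_link_rule(event: Event) -> Set[str]:
    # A purely structural check, no model involved.
    return {"spam"} if "free-crypto.example" in event.text else set()


def evaluate(event: Event, rules: List[Rule]) -> Set[str]:
    """Run every rule against the event and merge the labels they emit."""
    labels: Set[str] = set()
    for rule in rules:
        labels |= rule(event)
    return labels


def webhook_sink(event: Event, labels: Set[str]) -> None:
    """Stand-in for a sink that would POST labels back to an API or a labeler."""
    if labels:
        print(f"would POST labels {sorted(labels)} for {event.actor}")


if __name__ == "__main__":
    event = Event(
        kind="create_post",
        actor="did:example:alice",
        text="totally legit offer over at free-crypto.example",
        signals={"toxicity": 0.2},
    )
    webhook_sink(event, evaluate(event, [toxicity_rule, spam_link_rule]))
    # -> would POST labels ['spam'] for did:example:alice
```

In the real stack, the labels that rules emit are what end up surfaced to moderators in Ozone, or fed back to the app via webhooks for automatic action.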
A pretty neat proposal, right? I hope the conference organizers agree! And for what it's worth, I think giving talks like this is a great way to combat a lot of the FUD surrounding LLM usage in this context. Let me explain what I mean by that. Something that's been impressed upon me since joining this team -- present everywhere from our "you can just do things" approach to empowering developers, to our IETF standards work, to our moderation stack -- is that we treat the sustainability of our work as a first-order principle.
A lot of the lessons learned from "Big World" social apps over the last decade-plus (as opposed to deliberately small social networks) come back to this sustainability. Sometimes sustainability is a hard technical problem, as with decentralization (though that also requires motivating other communities to run infrastructure, so it has a big "human" element), and sometimes it's a people problem first: keeping moderation accountable to individual communities, and helping that community moderation scale sensibly so moderators don't burn out.
The tools we're developing for moderation are trying to solve for exactly this: let other communities use them as they see fit, automate the signaling mechanisms you need in order to grow, and provide plenty of people-facing tools to actually action those decisions. From an open source perspective, it can be especially challenging to solve these problems in the right order -- to prove that your app works and is credible for its non-technical and technical users alike -- and thanks to our contributors, I think we've managed to move far and fast past a lot of these early challenges. So here's to standardization, sustainability, and all the work to make that possible!