@pwm @p @w0rm @Death so I uh, I made another thing. this is kinda what I meant to build like a year ago but now it actually works.

it still needs work, and if I set it up as a fedi bot I would want a new account because it will be spammy for the first 15min. but basically its job is to correlate unique articles on the same news stories, and extract verifiable and confirmed facts while dismissing ideology, advocacy journalism, etc.
@vii @Death @pwm @w0rm

> its job is to correlate unique articles on the same news stories, and extract verifiable and confirmed facts while dismissing ideology, advocacy journalism, etc.

I'll reenable registrations if you wanna do it on FSE.

What would be *very* interesting is, while you're doing the fact extraction pass, you could do sentiment by topic/source. If you wanna do two bots, that would be cool.
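A sentiment-by-topic/source pass could start as something like this crude lexicon sketch (the lexicon, function, and outlet names here are hypothetical illustrations; a real pass would presumably use a proper sentiment model or an LLM call):

```python
from collections import defaultdict

# Tiny hand-rolled polarity lexicon, purely for illustration.
LEXICON = {
    "slams": -0.8, "blasts": -0.7, "disaster": -0.9,
    "praises": 0.6, "hails": 0.5, "victory": 0.7,
}

def sentiment_by_source(articles):
    """articles: iterable of (source, text) pairs.
    Returns mean lexicon polarity per source for words the lexicon knows."""
    totals, counts = defaultdict(float), defaultdict(int)
    for source, text in articles:
        for word in text.lower().split():
            w = word.strip(".,!?")
            if w in LEXICON:
                totals[source] += LEXICON[w]
                counts[source] += 1
    return {s: totals[s] / counts[s] for s in totals if counts[s]}

scores = sentiment_by_source([
    ("outlet_a", "Senator slams disaster response"),
    ("outlet_b", "Governor praises recovery victory"),
])
```

Aggregating per (topic, source) instead of per source is the same loop with a compound key.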
@p @pwm @Death @w0rm This is definitely doable, but I think I would need to spend some more time on the centroid problem first. If you check out @nuze right now, you'll find that the same story gets multiple centroids, partly because the story changes over time and partly because of the perspectives from which people choose to write their articles. An article headlined "Walz responds to Minnesota Shooting" can be very hard to correlate correctly with "Alex Pretti, 37, shot in Minneapolis" without spending some cycles in pure thought about the current news day, and possibly previous news days, from a 10k ft view.
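The splitting failure mode described here falls out of greedy threshold-based clustering: two angles on the same story that embed far apart each spawn their own centroid. A minimal sketch (hypothetical code, not the bot's actual implementation):

```python
import numpy as np

def assign_centroid(embedding, centroids, threshold=0.75):
    """Greedy single-pass clustering: join the nearest centroid if cosine
    similarity clears the threshold, otherwise spawn a new centroid.
    Two differently-angled articles on one story can land below the
    threshold and split -- the centroid problem described above."""
    best_idx, best_sim = None, -1.0
    for i, c in enumerate(centroids):
        sim = float(np.dot(embedding, c) /
                    (np.linalg.norm(embedding) * np.linalg.norm(c)))
        if sim > best_sim:
            best_idx, best_sim = i, sim
    if best_sim >= threshold:
        return best_idx
    centroids.append(embedding)
    return len(centroids) - 1

# Toy 2-d embeddings: the first two are "near" each other, the third is not.
centroids = []
a_idx = assign_centroid(np.array([1.0, 0.0]), centroids)  # new centroid
b_idx = assign_centroid(np.array([0.9, 0.1]), centroids)  # joins the first
c_idx = assign_centroid(np.array([0.0, 1.0]), centroids)  # splits off
```

With real headline embeddings, the "same story, different angle" case behaves like the third vector: similar enough to a human, below threshold to the model.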
@vii @Death @nuze @p @pwm @w0rm So, here's my plan to fix the centroid problem in NewsBurner, where stories about the same event split because of different angles or updates (like the Walz response to the Minnesota shooting versus the details on Alex Pretti):

- Embed claims directly in extractor.py and add an update method in db.py.
- Add a ClaimLinker in a new file that spots overlapping claims via high cosine similarity and stores those links in a new story_links table.
- Track topics with a TopicTracker class, extracting key subjects via quick LLM calls and linking stories that share them, integrated into story_merger.py for better matching.
- Extend the cleanup agent to merge stories that have strong claim links.
- Keep it all configurable in yaml, with thresholds to balance merge quality against cost, plus tests and metrics.

It should make the clustering a lot smarter and more unified.
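The ClaimLinker step could be sketched like this (the class and table names come from the plan; the thresholds, data shapes, and implementation are my assumptions):

```python
import itertools
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class ClaimLinker:
    """Sketch: two stories get linked when enough of their extracted-claim
    embeddings pair up above a similarity threshold. Output rows would be
    written to the proposed story_links table."""

    def __init__(self, sim_threshold=0.85, min_shared=2):
        self.sim_threshold = sim_threshold  # per-claim cosine cutoff
        self.min_shared = min_shared        # claims needed to link stories

    def link(self, stories):
        """stories: dict of story_id -> list of claim embedding vectors.
        Returns (story_a, story_b, shared_claim_count) tuples."""
        links = []
        for (sid_a, claims_a), (sid_b, claims_b) in itertools.combinations(
                stories.items(), 2):
            shared = sum(
                1 for ca in claims_a
                if any(cosine(ca, cb) >= self.sim_threshold for cb in claims_b)
            )
            if shared >= self.min_shared:
                links.append((sid_a, sid_b, shared))
        return links

# Toy 2-d claim embeddings: two angles on one event, plus an unrelated story.
stories = {
    "walz_response": [np.array([1.0, 0.0]), np.array([0.0, 1.0])],
    "pretti_shooting": [np.array([0.99, 0.05]), np.array([0.05, 0.99])],
    "unrelated": [np.array([-1.0, 0.0])],
}
links = ClaimLinker().link(stories)
```

The point of linking on claims rather than headlines is that "Walz responds" and "Alex Pretti, 37, shot" can still share concrete verifiable facts even when their headline embeddings sit far apart.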

@vii @nuze @Death @w0rm @eliza @p Would you wind up using traditional NLP techniques to perhaps "turn down" the sentiment in word choice? For certain classes of words in your 20-word fingerprints you could also use this as a way to homogenize them, leading to tighter clusters by substituting dispassionate synonyms. All it would take is a thesaurus and an "intensity" metric, so to speak. Gives you another tunable threshold though.
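The thesaurus-plus-intensity idea might look roughly like this (the thesaurus entries and the threshold value are hypothetical; a real version would use an actual lexical resource):

```python
# Hypothetical intensity thesaurus: charged word -> (neutral synonym, intensity 0..1).
THESAURUS = {
    "slammed": ("criticized", 0.8),
    "blasted": ("criticized", 0.9),
    "gunned": ("shot", 0.7),
    "massive": ("large", 0.6),
}

def homogenize(text, max_intensity=0.5):
    """Swap words above the intensity threshold for dispassionate synonyms
    before fingerprinting, so emotionally-charged variants of the same
    story converge on tighter clusters. max_intensity is the extra
    tunable threshold mentioned above."""
    out = []
    for word in text.split():
        core = word.lower().strip(".,!?")
        if core in THESAURUS:
            neutral, intensity = THESAURUS[core]
            if intensity > max_intensity:
                word = neutral
        out.append(word)
    return " ".join(out)

result = homogenize("Senator slammed the massive bill")
```

Raising max_intensity leaves more of the original word choice intact; lowering it flattens more aggressively, trading voice for cluster tightness.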