6 Comments
Matt Lubin

OMG I spent this past week trying to create exactly this but I fell down the rabbit hole of testing out various LLM-detectors. Too bad I didn't think to look for these much simpler approaches by Hassanein and Pandya. If I had known you'd be publishing `deslop` I'd have saved probably hundreds of thousands of Claude tokens πŸ˜…

Stephen D. Turner

Just another random Tuesday. I've surely torched millions of tokens doing something that's 99% identical to something that already exists.

Jenn

Not putting the Data Management Plans in this same bucket! πŸ˜… (Said as a Data Management Librarian.) Although now the NIH-funded researchers will only have to check Yes or No for most of the DMP, so they won't even need Claude for that anymore.

Stephen D. Turner

Hey Jenn! Thanks for reading! Came on too strong there. Not suggesting nobody should be writing these, just that *I* don't want to write them. I'll call you next time πŸŽ‰πŸ˜‰

Bedward

Interesting approach, and thanks for sharing. I do wonder what the end game is and what it accomplishes. Do you think the readers of these messages are screening for AI? We are rapidly approaching the point where AI is emailing AI; we can already do that if we want, though I'm not sure who is doing it or when. Obfuscating this with AI pretending not to write like AI seems like a step backward.

Stephen D. Turner

To be clear, I'm not advocating for anything here, just sharing a tool I created for writing text I don't want to write that you don't want to read. Which of course raises the question of why I'm required to write text I don't want to write that you don't want to read. For those cases I'm perfectly content to outsource the job to the AI du jour, and this tool just makes the result sound less like it came straight out of said AI.

Is this also a tool that would enable someone to cover their tracks and pass off AI writing as their own? Sure, but everyone's already doing this anyway.