Thoughts on the 2025 OWASP Top 10 for LLMs release
Attention Conservation Notice A lightly-edited copy of a bluesky thread – originally here – collecting some thoughts about the OWASP Top 10 for LLMs.
Security of AI, AI for Security.
Attention Conservation Notice An old and barely-edited Twitter thread (so please forgive the choppy writing and any formatting glitches) that I rescued from the memory hole. It has my own tips and advice for troubleshooting deep learning models when they don’t want to train. As we’ve moved away from hand-written training loops and people inventing new and weird architectures for their particular problems, toward just throwing LLMs/transformers at everything, these tips are probably less relevant.
Attention Conservation Notice This is just a barely-edited copy of an old Twitter thread that collects a lot of my thoughts on deciding if, when, and how to use deep learning (which is why it reads so choppily). I think the advice holds up, but given the rapid growth in the number of people who have applied deep learning to practical problems, I suspect much of it has graduated to “common knowledge” since I wrote it.
Attention Conservation Notice Notes on a paper that asks (of a particular kind of model pruning) “should we start where we plan to end up, and just train the pruned architecture from scratch?” The answer turns out to be that – once you adjust for the total number of FLOPs – just starting with the smaller model and random weights generally works fine. Plus other interesting observations about the structure-vs-initialization question.
Attention Conservation Notice Links and short commentary from recent news on facial recognition, mostly so I can track it and my reactions to it later; if you’re on Twitter you’ve probably seen them.
Attention Conservation Notice Slide decks from my two talks; if you didn’t see the talks these will likely be of limited interest.
Attention Conservation Notice In response to a Twitter thread, this is a short list of papers from my Zotero library related to the intersection of ML/data science and network security, grouped into vague categories reflecting my memory of each paper. Probably at least a little dated.
Attention Conservation Notice My notes on a paper that builds on previous efforts to apply neural machine translation (NMT) to decompiling binaries back into human-readable source code.
They focus on compiler-specific translation pairs, which allows them to use the compiler they target to a) act as an oracle, b) generate more source-translation pairs as needed, and c) generate error-specific training samples whenever their decompiler makes a mistake.