Opinion The speedrun is one of the internet’s genuinely new art forms. At its best, it’s akin to a virtuoso piano recital. Less emotional depth, more adrenaline. Watching an expert fly through a game creates an endorphin rush without the expense or time of doing it yourself.

Speedruns can be enlightening, too. The obvious use case is watching closely for the solution to something that’s stumping you as a player. As opportunities to learn, speedruns closely resemble the debriefs that military aviators get after an exercise, or that sports teams receive after matches. And now we have all sorts of video replay and analysis tools that work in near-real time. The art form works for everyone, from solo casual gamers to the most highly trained and highly paid professionals there are. Could it work for, say, cyber security, where we need all the help we can get?
Oh yes. If you have a yen for keeping up with cyber security in the raw, you’ve almost certainly found the streamers and YouTubers who crack open a chunk of malware and strip it to its bones like expert butchers preparing a deer for the barbecue. Their videos are not how-tos so much as demonstrations of technique: that of the ethical hacker doing the dissection, and that of the malware creators. Double bubble.
Then one comes along that doesn’t just report from the front line, but illustrates some truths about creating better security through attention to attributes that don’t normally get talked about. In particular, the implications of the claim that open source is a more secure way to build software. More eyes on the code means fewer places to hide bad stuff, but like so much in FOSS, this doesn’t come for free. Code is never just code: it exists in an ecosystem of creators, users and concepts, of trust and suspicion, and it will be judged by people with a whole spectrum of expertise. If you’re writing an open source system utility, for example, its chance of widespread adoption depends on its reputation for trustworthiness, and that reputation will reflect on you.
Who watches the watchers?
Talon is a case in point. A Windows de-bloater made by an outfit called Raven and distributed through GitHub as open source, it nonetheless got a rep as potential malware. Open source by itself guarantees nothing, and the conversation around whether or not Talon’s bona fides checked out simply grew and grew. Enter YouTube cyber security educator and ethical hacker John Hammond. His day job includes answering the question “Is it Malware?” He has the chops, he has the tools, he has the caffeine. Speedrun is go.
The 39-minute result is not a tutorial, and neither is this. It’s a demonstration of how he makes the call on whether something that looks sketchy, as parts of Talon’s innards do, is actually harmful. Ultimately, he concludes, it is not.
The reason so many people had their suspicions about the tool is that the stuff that looked sketchy to him had also looked suspicious to various automated scanners hunting for typical tell-tale malware traits in code. Hammond, being human, knows to go past these and see what the logic and execution path actually do, and it always ends up de-bloating rather than detonating. It’s rather good at it, too: you should see how it shuts down Microsoft’s Edge browser forever.
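To make that distinction concrete, here’s a minimal Python sketch, emphatically not Talon’s actual code, of the sort of browser-banishing routine that trips a scanner’s heuristics while doing nothing the user didn’t ask for. The Edge update service names are real; everything else is illustrative.

```python
# Illustrative only: a debloater-style "banish Edge" routine, not Talon's code.
# To a heuristic scanner, force-killing a browser and disabling its update
# services looks like malware behaviour; traced through, it just de-bloats.
import subprocess


def banish_edge() -> None:
    # Terminate any running Edge processes: aggressive-looking, but benign here
    subprocess.run(["taskkill", "/F", "/IM", "msedge.exe"], check=False)
    # Stop and disable the Edge update services so the browser stays gone
    for svc in ("edgeupdate", "edgeupdatem"):
        subprocess.run(["sc", "stop", svc], check=False)
        subprocess.run(["sc", "config", svc, "start=", "disabled"], check=False)


if __name__ == "__main__":
    banish_edge()
```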
If you watch him power through the Python and the PowerShell, both in the source and in the executable from GitHub, a pattern emerges: if you’re doing low-level, system-wide modifications to a Windows box, this is the nature of the beast, be it belligerent or benign.
For example, Talon has to turn off Defender, run as admin, override execution privileges, shell out to 15,000 lines of PowerShell spaghetti, and so on. Moreover, the designers chose to have Talon download multiple external executable binaries, as well as build the whole thing with a tool that produces a packed binary by converting all the Python to C and thence to machine code. So while that’s not obfuscation, it smells like it.
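For flavour, here’s a hedged, hypothetical Python sketch, again not Talon’s source, of the calls a debloater genuinely needs and a heuristic scanner genuinely hates: an elevation check, Defender asked to stand down, and a big unsigned PowerShell script run with the execution policy bypassed. The script filename is made up.

```python
# Hypothetical sketch of the pattern described above, not Talon's source code.
# Each step is something a system debloater plausibly needs, and each is also
# a classic tell-tale that automated malware scanners are trained to flag.
import ctypes
import subprocess
import sys


def require_admin() -> None:
    # Low-level, system-wide changes need elevation; so does most malware.
    if not ctypes.windll.shell32.IsUserAnAdmin():
        sys.exit("Re-run this from an elevated prompt.")


def relax_defences() -> None:
    # Asking Defender to stand down may be indispensable for some tweaks,
    # and it is exactly what heuristics are built to notice.
    subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Set-MpPreference -DisableRealtimeMonitoring $true"],
        check=False,
    )


def run_tweaks(script_path: str) -> None:
    # Shelling out to a large, unsigned PowerShell script with the execution
    # policy bypassed: necessary for local scripts, suspicious on sight.
    subprocess.run(
        ["powershell", "-NoProfile", "-ExecutionPolicy", "Bypass",
         "-File", script_path],
        check=False,
    )


if __name__ == "__main__":
    require_admin()
    relax_defences()
    run_tweaks("tweaks.ps1")  # hypothetical script name, not Talon's
```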
In each case, Hammond shows why there’s no malice hidden away. Probably. As he notes, the amount of detailed analysis you do depends on where you set the dials on your personal threat-comfort level.
How might Raven have avoided being considered suspicious? There’s a concept called defensive coding, where you consider each decision not just for how it contributes to functionality, but for how it would cope if given unexpected input. With Talon, the defensive question is whether a choice of technique will trigger malware scanners and, if it might but is indispensable, how to make it clear in the code what’s going on. You know, that pesky documentation stuff. The design overview. The comments in the code. If your product will need all those open source eyeballs to become trusted, then feed those eyeballs with what they need. There aren’t many Hammonds, but there are lots of curious wannabes, and even the occasional journalist eager to tell a story.
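What might that look like in practice? A sketch only, with illustrative names and paths rather than anything from Raven’s repo: the same scanner-triggering operation, wrapped in documentation that says why it exists, points to a design note, and leaves an audit trail for all those eyeballs.

```python
# A sketch of "defensive coding" for scanners: illustrative names throughout,
# nothing here is taken from Raven's repository.
import subprocess


def add_defender_exclusion(path: str) -> None:
    """Exclude *path* from Defender's real-time scanning.

    WHY THIS EXISTS: the bundled tweak scripts trip heuristic scanners and
    get quarantined mid-run (see docs/design-overview.md, a hypothetical
    design note). We exclude only our own install directory, never disable
    Defender globally, and we print the exact command so any reviewer can
    reproduce and audit what was done.
    """
    cmd = ["powershell", "-NoProfile", "-Command",
           f"Add-MpPreference -ExclusionPath '{path}'"]
    print("running:", " ".join(cmd))  # leave an audit trail
    subprocess.run(cmd, check=True)
```

The code does nothing different; it just gives a reviewer somewhere to start.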
Creating security is a huge task, and everyone who launches software for the masses has the opportunity to help or hinder, regardless of the actual intent of the product. Open source is a magnificent path to greater security across the board, because it keeps humans in the loop. Engineering for those humans is a force amplifier for good. Just ask the future historians speedrunning the history of cyber security centuries from now. ®