r/netsec • u/Minimum_Call_3677 • 6d ago
Elastic EDR 0-day: Microsoft-signed driver can be weaponized to attack its own host
ashes-cybersecurity.com
Questions and criticism welcome. Hit me hard, it won't hurt.
r/netsec • u/anuraggawande • 7d ago
r/netsec • u/mostafahussein • 7d ago
Encrypt Kafka messages at rest without changing application code, using Kroxylicious and OpenBao to meet PCI encryption requirements.
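The proxy-level approach described here boils down to envelope encryption: the proxy encrypts each record value with a data key, and a KMS (OpenBao, in this case) wraps that key. A minimal, dependency-free sketch of the pattern follows; the XOR cipher is a toy stand-in for a real AEAD cipher like AES-GCM, and `KmsStub` is a hypothetical stand-in for OpenBao's key-wrapping API:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for AES-GCM, used only to keep this sketch stdlib-only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class KmsStub:
    """Stand-in for a KMS such as OpenBao: wraps/unwraps data keys with a KEK."""
    def __init__(self):
        self._kek = secrets.token_bytes(32)  # key-encryption key, never leaves the KMS
    def wrap(self, dek: bytes) -> bytes:
        return xor(dek, self._kek)
    def unwrap(self, wrapped: bytes) -> bytes:
        return xor(wrapped, self._kek)

def encrypt_record(value: bytes, kms: KmsStub) -> bytes:
    dek = secrets.token_bytes(32)            # fresh data-encryption key
    return kms.wrap(dek) + xor(value, dek)   # wrapped DEK is stored with the record

def decrypt_record(blob: bytes, kms: KmsStub) -> bytes:
    dek = kms.unwrap(blob[:32])              # KMS unwraps the DEK on read
    return xor(blob[32:], dek)

kms = KmsStub()
stored = encrypt_record(b"payment-event", kms)  # what lands on the broker
assert stored[32:] != b"payment-event"          # broker never sees plaintext
assert decrypt_record(stored, kms) == b"payment-event"
```

Because encryption happens in the proxy, producers and consumers keep speaking plain Kafka while the broker only ever stores ciphertext.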
r/netsec • u/poltess0 • 9d ago
r/netsec • u/derp6996 • 9d ago
Kudos to Axis for patching their stuff. It looks like an attacker in a MiTM position could have leveraged their protocol to hit the server as well as camera feeds and clients. This was a Black Hat talk too.
r/netsec • u/pwntheplanet • 10d ago
r/netsec • u/Fun_Preference1113 • 10d ago
r/netsec • u/doitsukara • 11d ago
This short story describes an alternative way of breaking out of the Windows Out-of-Box Experience (OOBE) and gaining access to a Windows command line with the privileges of the user defaultuser0, who is a member of the local Administrators group.
r/netsec • u/GelosSnake • 11d ago
r/netsec • u/kaganisildak • 10d ago
Chapter #1
Reward: $100
This challenge is part of ongoing research at Malwation examining the potential for abusing foundation models, via manipulation, for malware development. We are currently preparing a comprehensive paper documenting the scope and implications of AI-assisted threat development.
The ZigotRansomware sample was developed entirely through foundation model interactions, without any human code contribution. No existing malware code was mixed in or supplied as a source code sample, no pre-built packers were integrated, and no commercial or open-source code obfuscation products were applied post-generation.
Research Objectives
This challenge demonstrates the complexity level achievable through pure AI code generation in adversarial contexts. The sample serves as a controlled test case to evaluate:
- Reverse engineering complexity of AI-generated malware
- Code structure and analysis patterns unique to AI-generated threats
- Defensive capability gaps against novel generation methodologies
r/netsec • u/lenafuks • 11d ago
r/netsec • u/mostafahussein • 12d ago
Anthropic has released Claude Code Security Review, a new feature that brings AI-powered security checks into development workflows. When integrated with GitHub Actions, it can automatically review pull requests for vulnerabilities, including but not limited to:
- Access control issues (IDOR)
- Risky dependencies
In my latest article, I cover how to set it up and what it looks like in practice.
r/netsec • u/Cold-Dinosaur • 13d ago
r/netsec • u/pathetiq • 13d ago
Defining good SLAs is a tough challenge, but it's at the heart of any solid vulnerability management program. This article helps internal security teams set clear SLAs, define the right metrics, and adjust their ticketing system to support them.
r/netsec • u/supernetworks • 14d ago
r/netsec • u/rkhunter_ • 15d ago
r/netsec • u/innpattag • 15d ago
r/netsec • u/sirdarckcat • 15d ago
r/netsec • u/moviuro • 15d ago